13 Search Results for "Kawase, Yasushi"


Document
The Last Success Problem with Samples

Authors: Toru Yoshinaga and Yasushi Kawase

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The last success problem is an optimal stopping problem that aims to maximize the probability of stopping on the last success in a sequence of n independent Bernoulli trials. In the classical setting where complete information about the distributions is available, Bruss [Bruss, 2000] provided an optimal stopping policy that ensures a winning probability of 1/e. However, assuming complete knowledge of the distributions is unrealistic in many practical applications. This paper investigates a variant of the last success problem where samples from each distribution are available instead of complete knowledge of them. When a single sample from each distribution is allowed, we provide a deterministic policy that guarantees a winning probability of 1/4. This is best possible, matching the upper bound provided by Nuti and Vondrák [Nuti and Vondrák, 2023]. Furthermore, for any positive constant ε, we show that a constant number of samples from each distribution is sufficient to guarantee a winning probability of 1/e - ε.
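
The 1/e benchmark above comes from Bruss's sum-the-odds policy in the full-information setting: compute the odds r_i = p_i/(1 - p_i), sum them from the last trial backwards until the sum reaches 1, and then stop at the first success from that point on. The minimal Python sketch below simulates this classical policy on illustrative probabilities; it does not implement the paper's sample-based policies.

import random

def bruss_threshold(p):
    """Index from which the sum-the-odds policy starts accepting successes."""
    odds_sum = 0.0
    start = 0
    for i in range(len(p) - 1, -1, -1):
        odds_sum += p[i] / (1 - p[i])
        if odds_sum >= 1:
            start = i
            break
    return start

def winning_probability(p, trials=100_000):
    start = bruss_threshold(p)
    wins = 0
    for _ in range(trials):
        outcomes = [random.random() < pi for pi in p]
        # Stop at the first success at or after the threshold index.
        stop = next((i for i in range(start, len(p)) if outcomes[i]), None)
        # We win if we stopped and no later trial succeeds.
        if stop is not None and not any(outcomes[stop + 1:]):
            wins += 1
    return wins / trials

p = [1 / (i + 2) for i in range(20)]   # illustrative success probabilities
print(winning_probability(p))          # close to the 1/e guarantee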

Cite as

Toru Yoshinaga and Yasushi Kawase. The Last Success Problem with Samples. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 105:1-105:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{yoshinaga_et_al:LIPIcs.ESA.2024.105,
  author =	{Yoshinaga, Toru and Kawase, Yasushi},
  title =	{{The Last Success Problem with Samples}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{105:1--105:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.105},
  URN =		{urn:nbn:de:0030-drops-211762},
  doi =		{10.4230/LIPIcs.ESA.2024.105},
  annote =	{Keywords: The Last Success Problem, Secretary Problem, Sample Information Model, Optimal Stopping, Online Algorithms}
}
Document
APPROX
On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms

Authors: Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Dense subgraph discovery is an important problem in graph mining and network analysis with several applications. Two canonical problems here are to find a maxcore (a subgraph of maximum minimum degree) and to find a densest subgraph (a subgraph of maximum average degree); both can be solved in polynomial time. Veldt, Benson, and Kleinberg [Veldt et al., 2021] introduced the generalized p-mean densest subgraph problem, which captures the maxcore problem when p = -∞ and the densest subgraph problem when p = 1. They observed that for p ≥ 1, the objective function is supermodular and hence the problem can be solved in polynomial time. In this work, we focus on the p-mean densest subgraph problem for p ∈ (-∞, 1). We prove that for every p ∈ (-∞, 1), the problem is NP-hard, thus resolving an open question from [Veldt et al., 2021]. We also show that for every p ∈ (0, 1), the weighted version of the problem is APX-hard. On the algorithmic front, we describe two simple 1/2-approximation algorithms for every p ∈ (-∞, 1). We complement the approximation algorithms by exhibiting non-trivial instances on which the algorithms simultaneously achieve an approximation factor of at most 1/2.
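
As a concrete reference for the objective, the sketch below evaluates the generalized p-mean of the induced degrees of a vertex set and runs a standard greedy peeling heuristic (repeatedly remove a minimum-degree vertex, remember the best intermediate set). It is restricted to p in (0, 1] to avoid zero-degree corner cases, the graph is illustrative, and it is not claimed to be either of the paper's two 1/2-approximation algorithms.

def p_mean_density(adj, S, p):
    """((1/|S|) * sum of deg_S(v)^p over v in S) ** (1/p), for p in (0, 1]."""
    if not S:
        return 0.0
    degs = [sum(1 for u in adj[v] if u in S) for v in S]
    return (sum(d ** p for d in degs) / len(S)) ** (1.0 / p)

def greedy_peel(adj, p):
    """Peel a minimum-degree vertex at each step; return the best set seen."""
    S = set(adj)
    best_set, best_val = set(S), p_mean_density(adj, S, p)
    while len(S) > 1:
        v = min(S, key=lambda x: sum(1 for u in adj[x] if u in S))
        S.remove(v)
        val = p_mean_density(adj, S, p)
        if val > best_val:
            best_set, best_val = set(S), val
    return best_set, best_val

# A small example graph as an adjacency dictionary (illustrative).
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_peel(adj, p=0.5))   # the triangle {1, 2, 3} wins here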

Cite as

Karthekeyan Chandrasekaran, Chandra Chekuri, Manuel R. Torres, and Weihao Zhu. On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chandrasekaran_et_al:LIPIcs.APPROX/RANDOM.2024.9,
  author =	{Chandrasekaran, Karthekeyan and Chekuri, Chandra and Torres, Manuel R. and Zhu, Weihao},
  title =	{{On the Generalized Mean Densest Subgraph Problem: Complexity and Algorithms}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  URN =		{urn:nbn:de:0030-drops-210025},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.9},
  annote =	{Keywords: Densest subgraph problem, Hardness of approximation, Approximation algorithms}
}
Document
Algorithms and Complexity for Path Covers of Temporal DAGs

Authors: Dibyayan Chakraborty, Antoine Dailly, Florent Foucaud, and Ralf Klasing

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
A path cover of a digraph is a collection of paths collectively containing its vertex set. A path cover with minimum cardinality for a directed acyclic graph can be found in polynomial time [Fulkerson, AMS'56; Cáceres et al., SODA'22]. Moreover, Dilworth’s celebrated theorem on chain coverings of partially ordered sets equivalently states that the minimum size of a path cover of a DAG is equal to the maximum size of a set of mutually unreachable vertices. In this paper, we examine how far these classic results can be extended to a dynamic setting. A temporal digraph has an arc set that changes over discrete time-steps; if the underlying digraph is acyclic, then it is a temporal DAG. A temporal path is a directed path in the underlying digraph, such that the time-steps of arcs are strictly increasing along the path. Two temporal paths are temporally disjoint if they do not occupy any vertex at the same time. A temporal path cover is a collection 𝒞 of temporal paths that covers all vertices, and 𝒞 is temporally disjoint if all its temporal paths are pairwise temporally disjoint. We study the computational complexities of the problems of finding a minimum-size temporal (disjoint) path cover (denoted as Temporal Path Cover and Temporally Disjoint Path Cover). On the negative side, we show that both Temporal Path Cover and Temporally Disjoint Path Cover are NP-hard even when the underlying DAG is planar, bipartite, subcubic, and there are only two arc-disjoint time-steps. Moreover, Temporally Disjoint Path Cover remains NP-hard even on temporal oriented trees. We also observe that natural temporal analogues of Dilworth’s theorem on these classes of temporal DAGs do not hold. In contrast, we show that Temporal Path Cover is polynomial-time solvable on temporal oriented trees by a reduction to Clique Cover for (static undirected) weakly chordal graphs (a subclass of perfect graphs for which Clique Cover admits an efficient algorithm). This highlights an interesting algorithmic difference between the two problems. Although it is NP-hard on temporal oriented trees, Temporally Disjoint Path Cover becomes polynomial-time solvable on temporal oriented lines and temporal rooted directed trees. Motivated by the hardness result on trees, we show that, in contrast, Temporal Path Cover admits an XP time algorithm with respect to parameter t_max + tw, where t_max is the maximum time-step and tw is the treewidth of the underlying static undirected graph; moreover, Temporally Disjoint Path Cover admits an FPT algorithm with respect to the same parameterization.
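
To make the central definition concrete, the small Python check below verifies, under the abstract's definitions, that a sequence of timed arcs forms a temporal path: consecutive arcs must chain head-to-tail in the underlying digraph and their time-steps must be strictly increasing. Representing a timed arc as a (tail, head, time) triple is an illustrative choice of encoding.

def is_temporal_path(arcs):
    """arcs: list of (tail, head, time) triples in traversal order."""
    for (u, v, t), (x, y, s) in zip(arcs, arcs[1:]):
        if v != x:      # must be a directed path in the underlying digraph
            return False
        if s <= t:      # time-steps must strictly increase along the path
            return False
    return True

print(is_temporal_path([("a", "b", 1), ("b", "c", 3), ("c", "d", 4)]))   # True
print(is_temporal_path([("a", "b", 2), ("b", "c", 2)]))                  # False: times not increasing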

Cite as

Dibyayan Chakraborty, Antoine Dailly, Florent Foucaud, and Ralf Klasing. Algorithms and Complexity for Path Covers of Temporal DAGs. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 38:1-38:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chakraborty_et_al:LIPIcs.MFCS.2024.38,
  author =	{Chakraborty, Dibyayan and Dailly, Antoine and Foucaud, Florent and Klasing, Ralf},
  title =	{{Algorithms and Complexity for Path Covers of Temporal DAGs}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{38:1--38:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.38},
  URN =		{urn:nbn:de:0030-drops-205940},
  doi =		{10.4230/LIPIcs.MFCS.2024.38},
  annote =	{Keywords: Temporal Graphs, Dilworth’s Theorem, DAGs, Path Cover, Temporally Disjoint Paths, Algorithms, Oriented Trees, Treewidth}
}
Document
Track A: Algorithms, Complexity and Games
Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets

Authors: Yasushi Kawase, Koichi Nishimura, and Hanna Sumita

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We study the problem of minimizing a given symmetric strictly convex function over the Minkowski sum of an integral base-polyhedron and an M-convex set. This problem has a hybrid of continuous and discrete structures. It emerges from the problem of allocating mixed goods, consisting of both divisible and indivisible goods, to agents with binary valuations so that a fairness measure, such as the Nash welfare, is maximized. It is known that an integral base-polyhedron and an M-convex set both have similar and nice properties, and the non-hybrid case can be solved in polynomial time. While the hybrid case lacks some of these properties, we reveal the structure of an optimal solution. Moreover, we exploit a proximity property inherent in the problem. Based on these findings, we demonstrate that our problem is NP-hard even in the fair allocation setting where all indivisible goods are identical. Moreover, we provide a polynomial-time algorithm for the fair allocation problem when all divisible goods are identical.
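
The fair-allocation connection can be illustrated with a toy brute force: two agents with binary valuations, two indivisible goods, and one divisible good, where the Nash welfare (product of utilities) is maximized over integral assignments of the indivisible goods and a grid of splits of the divisible good. The instance and the grid search are purely illustrative stand-ins for the convex-optimization machinery in the paper.

from itertools import product

# Two agents with binary valuations over two indivisible goods and one divisible good.
values = {
    "alice": {"g1": 1, "g2": 0, "div": 1},
    "bob":   {"g1": 1, "g2": 1, "div": 1},
}
agents = list(values)

def nash_welfare(owner, alice_share):
    """Product of utilities: indivisible goods go whole, the divisible good is split."""
    shares = {"alice": alice_share, "bob": 1.0 - alice_share}
    nw = 1.0
    for a in agents:
        u = sum(values[a][g] for g in ("g1", "g2") if owner[g] == a)
        u += values[a]["div"] * shares[a]
        nw *= u
    return nw

candidates = [({"g1": o1, "g2": o2}, x / 100)
              for o1, o2 in product(agents, repeat=2) for x in range(101)]
owner, share = max(candidates, key=lambda c: nash_welfare(*c))
print(owner, share, nash_welfare(owner, share))   # g1 -> alice, g2 -> bob, split 0.5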

Cite as

Yasushi Kawase, Koichi Nishimura, and Hanna Sumita. Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ICALP.2024.96,
  author =	{Kawase, Yasushi and Nishimura, Koichi and Sumita, Hanna},
  title =	{{Minimizing Symmetric Convex Functions over Hybrid of Continuous and Discrete Convex Sets}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.96},
  URN =		{urn:nbn:de:0030-drops-202393},
  doi =		{10.4230/LIPIcs.ICALP.2024.96},
  annote =	{Keywords: Integral base-polyhedron, Fair allocation, Matroid}
}
Document
Track A: Algorithms, Complexity and Games
Problems on Group-Labeled Matroid Bases

Authors: Florian Hörsch, András Imolay, Ryuhei Mizutani, Taihei Oki, and Tamás Schwarcz

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Consider a matroid equipped with a labeling of its ground set by elements of an abelian group. We define the label of a subset of the ground set as the sum of the labels of its elements. We study a collection of problems on finding bases and common bases of matroids with restrictions on their labels. For zero bases and zero common bases, the results are mostly negative. While finding a non-zero basis of a matroid is not difficult, it turns out that the complexity of finding a non-zero common basis depends on the group. Namely, we show that the problem is hard for a fixed group if it contains an element of order two, and polynomially solvable otherwise. As a generalization of both zero and non-zero constraints, we further study F-avoiding constraints, where we seek a basis or common basis whose label is not in a given set F of forbidden labels. Using algebraic techniques, we give a randomized algorithm for finding an F-avoiding common basis of two matroids represented over the same field for finite groups given as operation tables. The study of F-avoiding bases with groups given as oracles leads to a conjecture stating that whenever an F-avoiding basis exists, an F-avoiding basis can be obtained from an arbitrary basis by exchanging at most |F| elements. We prove the conjecture for the special cases when |F| ≤ 2 or the group is ordered. By relying on structural observations on matroids representable over fixed, finite fields, we verify a relaxed version of the conjecture for these matroids. As a consequence, we obtain a polynomial-time algorithm in these special cases for finding an F-avoiding basis when |F| is fixed.
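
As a toy illustration of the F-avoiding constraint, the sketch below enumerates the bases of a small graphic matroid (the spanning trees of a 4-vertex graph), sums the group labels of their elements in Z_3, and reports which bases avoid a forbidden label set F. The graph, the labels, and the group are illustrative choices, and brute force is used only to make the definitions concrete.

from itertools import combinations

def is_spanning_tree(n, tree_edges):
    """Union-find check that the edge subset is acyclic (n - 1 acyclic edges span n vertices)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return len(tree_edges) == n - 1

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
label = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (3, 0): 0, (0, 2): 2}   # labels in Z_3
F = {0}   # forbidden labels

for basis in combinations(edges, 3):
    if is_spanning_tree(4, basis):
        s = sum(label[e] for e in basis) % 3
        print(basis, "label =", s, "F-avoiding" if s not in F else "forbidden")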

Cite as

Florian Hörsch, András Imolay, Ryuhei Mizutani, Taihei Oki, and Tamás Schwarcz. Problems on Group-Labeled Matroid Bases. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 86:1-86:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{horsch_et_al:LIPIcs.ICALP.2024.86,
  author =	{H\"{o}rsch, Florian and Imolay, Andr\'{a}s and Mizutani, Ryuhei and Oki, Taihei and Schwarcz, Tam\'{a}s},
  title =	{{Problems on Group-Labeled Matroid Bases}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{86:1--86:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.86},
  URN =		{urn:nbn:de:0030-drops-202299},
  doi =		{10.4230/LIPIcs.ICALP.2024.86},
  annote =	{Keywords: matroids, matroid intersection, congruency constraint, exact-weight constraint, additive combinatorics, algebraic algorithm, strongly base orderability}
}
Document
Online Scheduling on Identical Machines with a Metric State Space

Authors: Hiromichi Goko, Akitoshi Kawamura, Yasushi Kawase, Kazuhisa Makino, and Hanna Sumita

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
This paper introduces an online scheduling problem on m identical machines with a metric state space, which generalizes the classical online scheduling problem on identical machines, the online traveling salesman problem, and the online dial-a-ride problem. Each job is associated with a source state, a destination state, a processing time, and a release time. Each machine can process a job on and after its release time. Before processing a job, a machine needs to change its state to the source state (in a time corresponding to the distance), and after the process of the job, the machine’s state becomes the destination state. While related research deals with a model in which only release times are unknown to the algorithm, this paper focuses on a general model in which destination states and processing times are also unknown. The main result of this paper is an O(log m / log log m)-competitive online algorithm for the problem, which is best possible. A key approach is to divide the difficulty of the problem. To cope with unknown release times, we provide frameworks to produce a min{2ρ + 1/2, ρ + 2}-competitive algorithm using a ρ-competitive algorithm for a basic case where all jobs are released at time 0. Then, focusing on unknown destination states and processing times, we construct an O(log m / log log m)-competitive algorithm for the basic case. We also provide improved algorithms for some special cases.
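
The interaction of state travel, release times, and processing times can be illustrated for a single machine as follows. The objective studied in the paper is not restated here, and the timing rule (a job may not start processing before its release time, while travel may happen earlier) is one natural interpretation chosen for illustration, as are the metric and the jobs.

def completion_time(jobs, dist, start_state=0):
    """jobs: list of (source, destination, processing_time, release_time), in service order."""
    t, state = 0.0, start_state
    for src, dst, proc, rel in jobs:
        t += dist(state, src)   # travel to the job's source state
        t = max(t, rel)         # processing may not start before the release time
        t += proc               # process the job
        state = dst             # the machine ends in the destination state
    return t

dist = lambda a, b: abs(a - b)              # a line metric (illustrative)
jobs = [(2, 5, 1.0, 0.0), (4, 1, 2.0, 3.0), (0, 0, 0.5, 0.0)]
print(completion_time(jobs, dist))          # 7.5 under this order and interpretation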

Cite as

Hiromichi Goko, Akitoshi Kawamura, Yasushi Kawase, Kazuhisa Makino, and Hanna Sumita. Online Scheduling on Identical Machines with a Metric State Space. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 32:1-32:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{goko_et_al:LIPIcs.STACS.2022.32,
  author =	{Goko, Hiromichi and Kawamura, Akitoshi and Kawase, Yasushi and Makino, Kazuhisa and Sumita, Hanna},
  title =	{{Online Scheduling on Identical Machines with a Metric State Space}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{32:1--32:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.32},
  URN =		{urn:nbn:de:0030-drops-158421},
  doi =		{10.4230/LIPIcs.STACS.2022.32},
  annote =	{Keywords: Online scheduling, Competitive analysis, Online dial-a-ride}
}
Document
Online Knapsack Problems with a Resource Buffer

Authors: Xin Han, Yasushi Kawase, Kazuhisa Makino, and Haruki Yokomaku

Published in: LIPIcs, Volume 149, 30th International Symposium on Algorithms and Computation (ISAAC 2019)


Abstract
In this paper, we introduce online knapsack problems with a resource buffer. In the problems, we are given a knapsack with capacity 1, a buffer with capacity R ≥ 1, and items that arrive one by one. Each arriving item has to be taken into the buffer or discarded on its arrival irrevocably. When every item has arrived, we transfer a subset of items in the current buffer into the knapsack. Our goal is to maximize the total value of the items in the knapsack. We consider four variants depending on whether items in the buffer are removable (i.e., we can remove items in the buffer) or non-removable, and proportional (i.e., the value of each item is proportional to its size) or general. For the general & non-removable case, we observe that no constant competitive algorithm exists for any R ≥ 1. For the proportional & non-removable case, we show that a simple greedy algorithm is optimal for every R ≥ 1. For the general & removable and the proportional & removable cases, we present optimal algorithms for small R and give asymptotically nearly optimal algorithms for general R.
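
The proportional & non-removable setting can be illustrated as follows: item values equal sizes, a buffer of capacity R ≥ 1 is filled greedily as items arrive (an admitted item can never be discarded later), and after the last arrival a subset of buffered items of total size at most 1 is packed into the knapsack. The greedy admission rule and the brute-force final packing below are one simple reading of the model for illustration, not necessarily the paper's optimal algorithm.

from itertools import combinations

def online_buffer_knapsack(sizes, R=1.5):
    """Proportional & non-removable variant: the value of an item equals its size."""
    buffer, used = [], 0.0
    for s in sizes:                  # items arrive one by one
        if used + s <= R:            # admit greedily while the buffer has room
            buffer.append(s)
            used += s
    # After all arrivals, pack the best subset of the buffer into the unit knapsack
    # (exact by brute force, since the example buffer is tiny).
    best = 0.0
    for k in range(len(buffer) + 1):
        for sub in combinations(buffer, k):
            if sum(sub) <= 1.0:
                best = max(best, sum(sub))
    return best

print(online_buffer_knapsack([0.6, 0.5, 0.4, 0.3, 0.2], R=1.5))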

Cite as

Xin Han, Yasushi Kawase, Kazuhisa Makino, and Haruki Yokomaku. Online Knapsack Problems with a Resource Buffer. In 30th International Symposium on Algorithms and Computation (ISAAC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 149, pp. 28:1-28:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{han_et_al:LIPIcs.ISAAC.2019.28,
  author =	{Han, Xin and Kawase, Yasushi and Makino, Kazuhisa and Yokomaku, Haruki},
  title =	{{Online Knapsack Problems with a Resource Buffer}},
  booktitle =	{30th International Symposium on Algorithms and Computation (ISAAC 2019)},
  pages =	{28:1--28:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-130-6},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{149},
  editor =	{Lu, Pinyan and Zhang, Guochuan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2019.28},
  URN =		{urn:nbn:de:0030-drops-115241},
  doi =		{10.4230/LIPIcs.ISAAC.2019.28},
  annote =	{Keywords: Online knapsack problem, Resource augmentation, Competitive analysis}
}
Document
Optimal Matroid Partitioning Problems

Authors: Yasushi Kawase, Kei Kimura, Kazuhisa Makino, and Hanna Sumita

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
This paper studies optimal matroid partitioning problems for various objective functions. In the problem, we are given a finite set E and k weighted matroids (E, ℐ_i, w_i), i = 1, …, k, and our task is to find a partition (I_1, …, I_k) of E such that I_i ∈ ℐ_i for all i and the objective value is minimized. For each objective function, we give a polynomial-time algorithm or prove NP-hardness. In particular, for the case when the given weighted matroids are identical and the objective function is the sum of the maximum weight in each set (i.e., ∑_{i=1}^{k} max_{e∈I_i} w_i(e)), we show that the problem is strongly NP-hard but admits a PTAS.
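
To make the highlighted objective concrete, the brute force below minimizes the sum of per-block maximum weights over all partitions of a small ground set into k blocks, where each block must be independent in the same uniform matroid of rank r (i.e., contain at most r elements). Uniform matroids and the tiny instance are illustrative simplifications of the general weighted-matroid setting.

from itertools import product

def min_sum_of_maxes(weights, k, r):
    """Minimize sum_i max_{e in I_i} w(e) over partitions into k blocks of size at most r."""
    elems = list(weights)
    best = float("inf")
    for assign in product(range(k), repeat=len(elems)):   # brute force over k^n assignments
        blocks = [[] for _ in range(k)]
        for e, b in zip(elems, assign):
            blocks[b].append(e)
        if any(len(b) > r for b in blocks):               # independence in the uniform matroid
            continue
        value = sum(max(weights[e] for e in b) for b in blocks if b)
        best = min(best, value)
    return best

weights = {"a": 4, "b": 3, "c": 3, "d": 1, "e": 1}   # illustrative weights
print(min_sum_of_maxes(weights, k=2, r=3))           # 5: blocks {4, 3, 3} and {1, 1}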

Cite as

Yasushi Kawase, Kei Kimura, Kazuhisa Makino, and Hanna Sumita. Optimal Matroid Partitioning Problems. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, pp. 51:1-51:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2017.51,
  author =	{Kawase, Yasushi and Kimura, Kei and Makino, Kazuhisa and Sumita, Hanna},
  title =	{{Optimal Matroid Partitioning Problems}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{51:1--51:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.51},
  URN =		{urn:nbn:de:0030-drops-82712},
  doi =		{10.4230/LIPIcs.ISAAC.2017.51},
  annote =	{Keywords: Matroids, Partitioning problem, PTAS, NP-hardness}
}
Document
Optimal Stopping Rules for Sequential Hypothesis Testing

Authors: Constantinos Daskalakis and Yasushi Kawase

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
Suppose that we are given sample access to an unknown distribution p over n elements and an explicit distribution q over the same n elements. We would like to reject the null hypothesis "p = q" after seeing as few samples as possible, when p ≠ q, while we never want to reject the null, when p = q. Well-known results show that Θ(√n/ε²) samples are necessary and sufficient for distinguishing whether p equals q versus p is ε-far from q in total variation distance. However, this requires the distinguishing radius ε to be fixed prior to deciding how many samples to request. Our goal is instead to design sequential hypothesis testers, i.e. online algorithms that request i.i.d. samples from p and stop as soon as they can confidently reject the hypothesis p = q, without being given a lower bound on the distance between p and q, when p ≠ q. In particular, we want to minimize the number of samples requested by our tests as a function of the distance between p and q, and if p = q we want the algorithm, with high probability, to never reject the null. Our work is motivated by and addresses the practical challenge of sequential A/B testing in Statistics. We show that, when n = 2, any sequential hypothesis test must see Ω((1/d_tv(p,q)²) log log(1/d_tv(p,q))) samples, with high (constant) probability, before it rejects p = q, where d_tv(p,q) is the (unknown to the tester) total variation distance between p and q. We match the dependence of this lower bound on d_tv(p,q) by proposing a sequential tester that rejects p = q from at most O((√n/d_tv(p,q)²) log log(1/d_tv(p,q))) samples with high (constant) probability. The Ω(√n) dependence on the support size n is also known to be necessary. We similarly provide two-sample sequential hypothesis testers, when sample access is given to both p and q, and discuss applications to sequential A/B testing.
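
A rough illustration of the sequential flavor for n = 2 (a coin with unknown heads probability p, tested against an explicit q): keep sampling and reject "p = q" once the empirical deviation crosses a time-varying, iterated-logarithm-style boundary. The boundary constants below are loose, illustrative choices, not the calibrated tester from the paper.

import math
import random

def sequential_coin_test(sample, q, delta=0.05, max_samples=200_000):
    """Draw a {0, 1} stream; reject 'p = q' when the deviation crosses the boundary."""
    heads = 0
    for t in range(1, max_samples + 1):
        heads += sample()
        p_hat = heads / t
        # Iterated-logarithm-style boundary (illustrative constants).
        radius = math.sqrt((2 * math.log(math.log(max(t, 3))) + math.log(3 / delta)) / t)
        if abs(p_hat - q) > radius:
            return "reject", t
    return "no decision", max_samples

random.seed(0)
p, q = 0.55, 0.50
print(sequential_coin_test(lambda: random.random() < p, q))   # the larger |p - q| is, the earlier this rejects
print(sequential_coin_test(lambda: random.random() < q, q))   # under the null, the boundary is rarely crossed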

Cite as

Constantinos Daskalakis and Yasushi Kawase. Optimal Stopping Rules for Sequential Hypothesis Testing. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 32:1-32:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{daskalakis_et_al:LIPIcs.ESA.2017.32,
  author =	{Daskalakis, Constantinos and Kawase, Yasushi},
  title =	{{Optimal Stopping Rules for Sequential Hypothesis Testing}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{32:1--32:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.32},
  URN =		{urn:nbn:de:0030-drops-78237},
  doi =		{10.4230/LIPIcs.ESA.2017.32},
  annote =	{Keywords: property testing, sequential hypothesis testing, A/B testing}
}
Document
Surrogate Optimization for p-Norms

Authors: Yasushi Kawase and Kazuhisa Makino

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
In this paper, we study the effect of surrogate objective functions in optimization problems. We introduce the surrogate ratio as a measure of this effect, defined as the ratio between the optimal values of the original and surrogate objective functions. We prove that the surrogate ratio is at most μ^{|1/p - 1/q|} when the objective functions are p- and q-norms, and the feasible region is a μ-dimensional space (i.e., a subspace of ℝ^μ), a μ-intersection of matroids, or a μ-extendible system. We also show that this is the best possible bound. In addition, for μ-systems, we demonstrate that the ratio becomes μ^{1/p} when p < q and unbounded if p > q. Here, a μ-system is an independence system such that, for any subset of the ground set, the ratio of the cardinality of the largest to the smallest maximal independent subset of it is at most μ. We further extend our results to the surrogate ratios for approximate solutions.
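
The μ^{|1/p - 1/q|} factor is closely related to the standard equivalence of p- and q-norms on ℝ^μ: for p < q one has ||x||_q ≤ ||x||_p ≤ μ^{1/p - 1/q}·||x||_q. The short numerical check below verifies this underlying inequality on random vectors; it illustrates only this norm relation, not the surrogate-ratio arguments for matroid intersections or extendible systems.

import random

def p_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

random.seed(1)
mu, p, q = 8, 1.5, 3.0                  # dimension and exponents (illustrative)
factor = mu ** (1.0 / p - 1.0 / q)

for _ in range(5):
    x = [random.uniform(-1, 1) for _ in range(mu)]
    lo, hi = p_norm(x, q), p_norm(x, p)
    assert lo <= hi <= factor * lo + 1e-12
    print(f"{lo:.4f} <= {hi:.4f} <= {factor * lo:.4f}")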

Cite as

Yasushi Kawase and Kazuhisa Makino. Surrogate Optimization for p-Norms. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 41:1-41:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.41,
  author =	{Kawase, Yasushi and Makino, Kazuhisa},
  title =	{{Surrogate Optimization for p-Norms}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{41:1--41:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.41},
  URN =		{urn:nbn:de:0030-drops-68118},
  doi =		{10.4230/LIPIcs.ISAAC.2016.41},
  annote =	{Keywords: surrogate optimization, matroid, extendible system, p-norm}
}
Document
Optimal Composition Ordering Problems for Piecewise Linear Functions

Authors: Yasushi Kawase, Kazuhisa Makino, and Kento Seimi

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
In this paper, we introduce maximum composition ordering problems. The input is n real functions f_1, …, f_n : ℝ → ℝ and a constant c ∈ ℝ. We consider two settings: total and partial compositions. The maximum total composition ordering problem is to compute a permutation σ : [n] → [n] which maximizes f_{σ(n)} ∘ f_{σ(n-1)} ∘ ⋯ ∘ f_{σ(1)}(c), where [n] = {1, …, n}. The maximum partial composition ordering problem is to compute a permutation σ : [n] → [n] and a nonnegative integer k (0 ≤ k ≤ n) which maximize f_{σ(k)} ∘ f_{σ(k-1)} ∘ ⋯ ∘ f_{σ(1)}(c). We propose O(n log n) time algorithms for the maximum total and partial composition ordering problems for monotone linear functions f_i, which generalize linear deterioration and shortening models for the time-dependent scheduling problem. We also show that the maximum partial composition ordering problem can be solved in polynomial time if f_i is of the form max{a_i x + b_i, c_i} for some constants a_i (≥ 0), b_i and c_i. As a corollary, we show that the two-valued free-order secretary problem can be solved in polynomial time. We finally prove that there exists no constant-factor approximation algorithm for the problems, even if f_i's are monotone, piecewise linear functions with at most two pieces, unless P = NP.
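
The two problem variants can be made concrete by exhaustive search on a tiny instance: try every permutation (and, for the partial version, every prefix length k) and report the maximum composition value. The brute force is exponential and purely illustrative, and the monotone linear functions below use arbitrary example coefficients.

from itertools import permutations

def compose_value(fs, order, c):
    """Apply f_{order[0]} first, then f_{order[1]}, and so on, starting from c."""
    x = c
    for i in order:
        x = fs[i](x)
    return x

def max_total(fs, c):
    return max(compose_value(fs, order, c) for order in permutations(range(len(fs))))

def max_partial(fs, c):
    n = len(fs)
    return max(compose_value(fs, order[:k], c)
               for order in permutations(range(n)) for k in range(n + 1))

# Monotone linear functions f_i(x) = a_i * x + b_i (illustrative coefficients).
coeffs = [(1.2, -3.0), (0.5, 4.0), (2.0, -1.0)]
fs = [lambda x, a=a, b=b: a * x + b for a, b in coeffs]
print(max_total(fs, c=1.0), max_partial(fs, c=1.0))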

Cite as

Yasushi Kawase, Kazuhisa Makino, and Kento Seimi. Optimal Composition Ordering Problems for Piecewise Linear Functions. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.42,
  author =	{Kawase, Yasushi and Makino, Kazuhisa and Seimi, Kento},
  title =	{{Optimal Composition Ordering Problems for Piecewise Linear Functions}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.42},
  URN =		{urn:nbn:de:0030-drops-68126},
  doi =		{10.4230/LIPIcs.ISAAC.2016.42},
  annote =	{Keywords: function composition, time-dependent scheduling}
}
Document
Additive Approximation Algorithms for Modularity Maximization

Authors: Yasushi Kawase, Tomomi Matsui, and Atsushi Miyauchi

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
The modularity is a quality function in community detection, which was introduced by Newman and Girvan [Phys. Rev. E, 2004]. Community detection in graphs is now often conducted through modularity maximization: given an undirected graph G = (V, E), we are asked to find a partition C of V that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity maximization algorithms with provable approximation guarantees has attracted significant attention in the computer science community. In this study, we further investigate the approximability of modularity maximization. More specifically, we propose a polynomial-time (cos((3 - √5)/4 · π) - (1 + √5)/8)-additive approximation algorithm for the modularity maximization problem. Note here that cos((3 - √5)/4 · π) - (1 + √5)/8 < 0.42084 holds. This improves the current best additive approximation error of 0.4672, which was recently provided by Dinh, Li, and Thai (2015). Interestingly, our analysis also demonstrates that the proposed algorithm obtains a nearly-optimal solution for any instance with a high modularity value. Moreover, we propose a polynomial-time 0.16598-additive approximation algorithm for the maximum modularity cut problem. It should be noted that this is the first non-trivial approximability result for the problem. Finally, we demonstrate that our approximation algorithm can be extended to some related problems.
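
For reference, the Newman-Girvan modularity of a partition C of V is Q(C) = Σ_{c∈C} (e_c/m - (D_c/(2m))²), where m is the number of edges, e_c the number of edges inside community c, and D_c the total degree of c. The short Python function below simply evaluates this standard definition for an unweighted graph; it is independent of the paper's approximation algorithms.

def modularity(edges, communities):
    """Newman-Girvan modularity of a partition given as a list of vertex sets."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for c in communities:
        e_c = sum(1 for u, v in edges if u in c and v in c)   # intra-community edges
        d_c = sum(degree.get(v, 0) for v in c)                # total degree of c
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
print(modularity(edges, [{1, 2, 3}, {4, 5, 6}]))   # two triangles joined by one edge: about 0.357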

Cite as

Yasushi Kawase, Tomomi Matsui, and Atsushi Miyauchi. Additive Approximation Algorithms for Modularity Maximization. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 43:1-43:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.43,
  author =	{Kawase, Yasushi and Matsui, Tomomi and Miyauchi, Atsushi},
  title =	{{Additive Approximation Algorithms for Modularity Maximization}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{43:1--43:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.43},
  URN =		{urn:nbn:de:0030-drops-68136},
  doi =		{10.4230/LIPIcs.ISAAC.2016.43},
  annote =	{Keywords: networks, community detection, modularity maximization, approximation algorithms}
}
Document
The Densest Subgraph Problem with a Convex/Concave Size Function

Authors: Yasushi Kawase and Atsushi Miyauchi

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
Given an edge-weighted undirected graph G = (V, E, w), the density of S ⊆ V is defined as w(S)/|S|, where w(S) is the sum of weights of the edges in the subgraph induced by S. The densest subgraph problem asks for S ⊆ V that maximizes the density w(S)/|S|. The problem has received significant attention recently because it can be solved exactly in polynomial time. However, the densest subgraph problem has a drawback; it may happen that the obtained subset is too large or too small in comparison with the desired size of the output. In this study, we address the size issue by generalizing the density of S ⊆ V. Specifically, we introduce the f-density of S ⊆ V, which is defined as w(S)/f(|S|), where f : ℤ_{≥0} → ℝ_{≥0} is a monotonically non-decreasing function. In the f-densest subgraph problem (f-DS), we are asked to find S ⊆ V that maximizes the f-density w(S)/f(|S|). Although f-DS does not explicitly specify the size of the output subset of vertices, we can handle the above size issue using a convex size function f or a concave size function f appropriately. For f-DS with convex function f, we propose a nearly-linear-time algorithm with a provable approximation guarantee. In particular, for f-DS with f(x) = x^α (α ∈ [1, 2]), our algorithm has an approximation ratio of 2·n^{(α-1)(2-α)}. On the other hand, for f-DS with concave function f, we propose a linear-programming-based polynomial-time exact algorithm. It should be emphasized that this algorithm obtains not only an optimal solution to the problem but also subsets of vertices corresponding to the extreme points of the upper convex hull of {(|S|, w(S)) | S ⊆ V}, which we refer to as the dense frontier points. We also propose a flow-based combinatorial exact algorithm for unweighted graphs that runs in O(n³) time. Finally, we propose a nearly-linear-time 3-approximation algorithm.
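
To make the f-density concrete, the brute force below evaluates w(S)/f(|S|) with f(x) = x^α for every non-empty vertex subset of a tiny unweighted graph and returns a maximizer. It only illustrates the definition; the nearly-linear-time, LP-based, and flow-based algorithms from the abstract are not reproduced here.

from itertools import combinations

def f_densest(vertices, edges, alpha):
    """Maximize w(S) / |S|**alpha by brute force (unit edge weights)."""
    best_set, best_val = None, float("-inf")
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            s = set(S)
            w = sum(1 for u, v in edges if u in s and v in s)
            val = w / (k ** alpha)
            if val > best_val:
                best_set, best_val = s, val
    return best_set, best_val

vertices = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]   # a K4 with a pendant vertex
print(f_densest(vertices, edges, alpha=1.0))   # alpha = 1 recovers the classical densest subgraph (the K4)
print(f_densest(vertices, edges, alpha=1.5))   # a convex f(x) = x^1.5 penalizes larger subsets more heavily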

Cite as

Yasushi Kawase and Atsushi Miyauchi. The Densest Subgraph Problem with a Convex/Concave Size Function. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 44:1-44:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.44,
  author =	{Kawase, Yasushi and Miyauchi, Atsushi},
  title =	{{The Densest Subgraph Problem with a Convex/Concave Size Function}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{44:1--44:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.44},
  URN =		{urn:nbn:de:0030-drops-68149},
  doi =		{10.4230/LIPIcs.ISAAC.2016.44},
  annote =	{Keywords: graphs, dense subgraph extraction, densest subgraph problem, approximation algorithms}
}