
LIPIcs, Volume 64

27th International Symposium on Algorithms and Computation (ISAAC 2016)




Publication Details

  • Published: 2016-12-07
  • Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-026-2
  • DBLP: db/conf/isaac/isaac2016

Documents

Complete Volume
LIPIcs, Volume 64, ISAAC'16, Complete Volume

Authors: Seok-Hee Hong


Abstract
LIPIcs, Volume 64, ISAAC'16, Complete Volume

Cite as

Seok-Hee Hong. LIPIcs, Volume 64, ISAAC'16, Complete Volume. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@Proceedings{hong:LIPIcs.ISAAC.2016,
  title =	{{LIPIcs, Volume 64, ISAAC'16, Complete Volume}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016},
  URN =		{urn:nbn:de:0030-drops-69067},
  doi =		{10.4230/LIPIcs.ISAAC.2016},
  annote =	{Keywords: Data Structures, Nonnumerical Algorithms and Problems, Optimization, Discrete Mathematics, Mathematical Software, Algorithms, Problem Solving, Control Methods, and Search, Computational Geometry and Object Modeling}
}
Front Matter
Front Matter, Table of Contents, Preface, Program Committee, External Reviewers

Authors: Seok-Hee Hong


Abstract
Front Matter, Table of Contents, Preface, Program Committee, External Reviewers

Cite as

Seok-Hee Hong. Front Matter, Table of Contents, Preface, Program Committee, External Reviewers. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 0:i-0:xiv, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{hong:LIPIcs.ISAAC.2016.0,
  author =	{Hong, Seok-Hee},
  title =	{{Front Matter, Table of Contents, Preface, Program Committee, External Reviewers}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.0},
  URN =		{urn:nbn:de:0030-drops-67728},
  doi =		{10.4230/LIPIcs.ISAAC.2016.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Program Committee, External Reviewers}
}
Invited Talk
Towards Processing of Big Graphs: from Theory, Algorithm to System (Invited Talk)

Authors: Xuemin Lin


Abstract
Graphs are a very important part of Big Data and are widely used for modelling complex structured data, with a broad spectrum of applications such as bioinformatics, web search, social networks, and road networks. Over the last decade, tremendous research efforts have been devoted to many fundamental problems in managing and analysing graph data. In this talk, I will present some of our recent research efforts in processing big graphs, including scalable processing theory and techniques, distributed computation, and system frameworks.

Cite as

Xuemin Lin. Towards Processing of Big Graphs: from Theory, Algorithm to System (Invited Talk). In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, p. 1:1, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{lin:LIPIcs.ISAAC.2016.1,
  author =	{Lin, Xuemin},
  title =	{{Towards Processing of Big Graphs: from Theory, Algorithm to System}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.1},
  URN =		{urn:nbn:de:0030-drops-68346},
  doi =		{10.4230/LIPIcs.ISAAC.2016.1},
  annote =	{Keywords: Graph Processing, Big Data, Cloud Computing}
}
Invited Talk
Compressed and Searchable Indexes for Highly Similar Strings (Invited Talk)

Authors: Kunsoo Park


Abstract
The collection indexing problem is defined as follows: Given a collection of highly similar strings, build a compressed index for the collection of strings, and when a pattern is given, find all occurrences of the pattern in the given strings. Since the index is compressed, we also need a separate operation which retrieves a specified substring of one of the given strings. Such a collection of highly similar strings can be found in genome sequences of a species and in documents stored in a version control system. Many indexes for the collection indexing problem have been developed, most of which use classical compression schemes such as run-length encoding and Lempel-Ziv compressions to exploit the similarity of the given strings. We introduce a new index for highly similar strings, called FM index of alignment. We start by finding common regions and non-common regions of highly similar strings. We need not find a multiple alignment of non-common regions. Finding common and non-common regions is much easier and simpler than finding a multiple alignment. Then we make a transformed alignment of the given strings, where gaps in a non-common region are put together into one gap. We define a suffix array of alignment on the transformed alignment, and the FM index of alignment is an FM index of this suffix array of alignment. The FM index of alignment supports the LF mapping and backward search, the key functionalities of the FM index. The FM index of alignment takes less space than other indexes and its pattern search is also fast.

Cite as

Kunsoo Park. Compressed and Searchable Indexes for Highly Similar Strings (Invited Talk). In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, p. 2:1, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{park:LIPIcs.ISAAC.2016.2,
  author =	{Park, Kunsoo},
  title =	{{Compressed and Searchable Indexes for Highly Similar Strings}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.2},
  URN =		{urn:nbn:de:0030-drops-68359},
  doi =		{10.4230/LIPIcs.ISAAC.2016.2},
  annote =	{Keywords: Index for similar strings, FM index, Suffix array, Alignment}
}
Streaming Verification of Graph Properties

Authors: Amirali Abdullah, Samira Daruki, Chitradeep Dutta Roy, and Suresh Venkatasubramanian


Abstract
Streaming interactive proofs (SIPs) are a framework for outsourced computation. A computationally limited streaming client (the verifier) hands over a large data set to an untrusted server (the prover) in the cloud, and the two parties run a protocol to confirm the correctness of the result with high probability. SIPs are particularly interesting for problems that are hard to solve (or even approximate) well in a streaming setting. The most notable of these problems is finding maximum matchings, which has received intense interest in recent years but has strong lower bounds even for constant-factor approximations. In this paper, we present efficient streaming interactive proofs that can verify maximum matchings exactly. Our results cover all flavors of matchings (bipartite/non-bipartite and weighted). In addition, we also present streaming verifiers for approximate metric TSP. In particular, these are the first efficient results for weighted matchings and for metric TSP in any streaming verification model.

Cite as

Amirali Abdullah, Samira Daruki, Chitradeep Dutta Roy, and Suresh Venkatasubramanian. Streaming Verification of Graph Properties. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 3:1-3:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{abdullah_et_al:LIPIcs.ISAAC.2016.3,
  author =	{Abdullah, Amirali and Daruki, Samira and Roy, Chitradeep Dutta and Venkatasubramanian, Suresh},
  title =	{{Streaming Verification of Graph Properties}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.3},
  URN =		{urn:nbn:de:0030-drops-67730},
  doi =		{10.4230/LIPIcs.ISAAC.2016.3},
  annote =	{Keywords: streaming interactive proofs, verification, matching, travelling salesman problem, graph algorithms}
}
Building Clusters with Lower-Bounded Sizes

Authors: Faisal Abu-Khzam, Cristina Bazgan, Katrin Casel, and Henning Fernau


Abstract
Classical clustering problems search for a partition of objects into a fixed number of clusters. In many scenarios, however, the number of clusters is not known or not necessarily fixed. Further, clusters are sometimes only considered to be of significance if they have a certain size. We discuss clustering into sets of minimum cardinality k without a fixed number of sets and present a general model for these types of problems. This general framework allows the comparison of different measures to assess the quality of a clustering. We specifically consider nine quality measures and classify the complexity of the resulting problems with respect to k. Further, we derive some polynomial-time solvable cases for k = 2 with connections to matching-type problems, which, among other graph problems, are then used to compute approximations for larger values of k.

Cite as

Faisal Abu-Khzam, Cristina Bazgan, Katrin Casel, and Henning Fernau. Building Clusters with Lower-Bounded Sizes. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 4:1-4:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{abukhzam_et_al:LIPIcs.ISAAC.2016.4,
  author =	{Abu-Khzam, Faisal and Bazgan, Cristina and Casel, Katrin and Fernau, Henning},
  title =	{{Building Clusters with Lower-Bounded Sizes}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{4:1--4:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.4},
  URN =		{urn:nbn:de:0030-drops-67742},
  doi =		{10.4230/LIPIcs.ISAAC.2016.4},
  annote =	{Keywords: Clustering, Approximation Algorithms, Complexity, Matching}
}
Simultaneous Feedback Edge Set: A Parameterized Perspective

Authors: Akanksha Agrawal, Fahad Panolan, Saket Saurabh, and Meirav Zehavi


Abstract
In a recent article, Agrawal et al. (STACS 2016) studied a simultaneous variant of the classic Feedback Vertex Set problem, called Simultaneous Feedback Vertex Set (Sim-FVS). In this problem the input is an n-vertex graph G, an integer k and a coloring function col : E(G) -> 2^[alpha], and the objective is to check whether there exists a vertex subset S of cardinality at most k in G such that for all i in [alpha], G_i - S is acyclic. Here, G_i = (V(G), {e in E(G) | i in col(e)}) and [alpha] = {1,...,alpha}. In this paper we consider the edge variant of the problem, namely Simultaneous Feedback Edge Set (Sim-FES). In this problem, the input is the same as the input of Sim-FVS and the objective is to check whether there is an edge subset S of cardinality at most k in G such that for all i in [alpha], G_i - S is acyclic. Unlike the vertex variant of the problem, when alpha = 1 the problem is equivalent to finding a maximal spanning forest and hence is polynomial-time solvable. We show that for alpha = 3, Sim-FES is NP-hard by giving a reduction from Vertex Cover on cubic graphs. The same reduction shows that the problem does not admit an algorithm of running time O(2^o(k) n^O(1)) unless ETH fails. This hardness result is complemented by an FPT algorithm for Sim-FES running in time O(2^((omega k alpha) + (alpha log k)) n^O(1)), where omega is the exponent in the running time of matrix multiplication. The same algorithm gives a polynomial-time algorithm for the case alpha = 2. We also give a kernel for Sim-FES with (k alpha)^O(alpha) vertices. Finally, we consider the problem Maximum Simultaneous Acyclic Subgraph. Here, the input is a graph G, an integer q, and a coloring function col : E(G) -> 2^[alpha]. The question is whether there is an edge subset F of cardinality at least q in G such that for all i in [alpha], G[F_i] is acyclic. Here, F_i = {e in F | i in col(e)}. We give an FPT algorithm for Maximum Simultaneous Acyclic Subgraph running in time O(2^(omega q alpha) n^O(1)). All our algorithms are based on a parameterized version of the Matroid Parity problem.

Cite as

Akanksha Agrawal, Fahad Panolan, Saket Saurabh, and Meirav Zehavi. Simultaneous Feedback Edge Set: A Parameterized Perspective. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 5:1-5:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.ISAAC.2016.5,
  author =	{Agrawal, Akanksha and Panolan, Fahad and Saurabh, Saket and Zehavi, Meirav},
  title =	{{Simultaneous Feedback Edge Set: A Parameterized Perspective}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{5:1--5:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.5},
  URN =		{urn:nbn:de:0030-drops-67767},
  doi =		{10.4230/LIPIcs.ISAAC.2016.5},
  annote =	{Keywords: parameterized complexity, feedback edge set, alpha-matroid parity}
}
Kernels for Deletion to Classes of Acyclic Digraphs

Authors: Akanksha Agrawal, Saket Saurabh, Roohani Sharma, and Meirav Zehavi


Abstract
In the Directed Feedback Vertex Set (DFVS) problem, we are given a digraph D on n vertices and a positive integer k, and the objective is to check whether there exists a set of vertices S of size at most k such that F = D - S is an acyclic digraph. In a recent paper, Mnich and van Leeuwen [STACS 2016] considered the kernelization complexity of DFVS with an additional restriction on F, namely that F must be an out-forest (Out-Forest Vertex Deletion Set), an out-tree (Out-Tree Vertex Deletion Set), or a (directed) pumpkin (Pumpkin Vertex Deletion Set). Their objective was to shed some light on the kernelization complexity of the DFVS problem, a well-known open problem in the area of Parameterized Complexity. In this article, we improve the kernel sizes of Out-Forest Vertex Deletion Set from O(k^3) to O(k^2) and of Pumpkin Vertex Deletion Set from O(k^18) to O(k^3). We also prove that the former kernel size is tight under certain complexity-theoretic assumptions.

Cite as

Akanksha Agrawal, Saket Saurabh, Roohani Sharma, and Meirav Zehavi. Kernels for Deletion to Classes of Acyclic Digraphs. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 6:1-6:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{agrawal_et_al:LIPIcs.ISAAC.2016.6,
  author =	{Agrawal, Akanksha and Saurabh, Saket and Sharma, Roohani and Zehavi, Meirav},
  title =	{{Kernels for Deletion to Classes of Acyclic Digraphs}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{6:1--6:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.6},
  URN =		{urn:nbn:de:0030-drops-67777},
  doi =		{10.4230/LIPIcs.ISAAC.2016.6},
  annote =	{Keywords: out-forest, pumpkin, parameterized complexity, kernelization}
}
An Efficient Algorithm for Placing Electric Vehicle Charging Stations

Authors: Pankaj K. Agarwal, Jiangwei Pan, and Will Victor


Abstract
Motivated by the increasing popularity of electric vehicles (EV) and a lack of charging stations in the road network, we study the shortest path hitting set (SPHS) problem. Roughly speaking, given an input graph G, the goal is to compute a small-size subset H of vertices of G such that by placing charging stations at the vertices in H, every shortest path in G becomes EV-feasible, i.e., an EV can travel between any two vertices of G along the shortest path, starting with a full charge. In this paper, we propose a bi-criteria approximation algorithm with running time near-linear in the size of G that achieves a logarithmic approximation on |H| and may require the EV to deviate slightly from the shortest path. We also present a data structure for computing an EV-feasible path between two query vertices of G.

Cite as

Pankaj K. Agarwal, Jiangwei Pan, and Will Victor. An Efficient Algorithm for Placing Electric Vehicle Charging Stations. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 7:1-7:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ISAAC.2016.7,
  author =	{Agarwal, Pankaj K. and Pan, Jiangwei and Victor, Will},
  title =	{{An Efficient Algorithm for Placing Electric Vehicle Charging Stations}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{7:1--7:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.7},
  URN =		{urn:nbn:de:0030-drops-67782},
  doi =		{10.4230/LIPIcs.ISAAC.2016.7},
  annote =	{Keywords: Shortest path hitting set, Charging station placement, Electric vehicle}
}
Finding k Simple Shortest Paths and Cycles

Authors: Udit Agarwal and Vijaya Ramachandran


Abstract
We present algorithms and techniques for several problems related to finding multiple simple shortest paths and cycles in a graph. Our main result is a new algorithm for finding k simple shortest paths for all pairs of vertices in a weighted directed graph G = (V, E). For k = 2 our algorithm runs in O(mn + n^2 log n) time, where m and n are the number of edges and vertices in G. For k = 3 our algorithm runs in O(mn^2 + n^3 log n) time, which is almost a factor of n faster than the best previous algorithm. Our approach is based on forming suitable path extensions to find simple shortest paths; this method is different from the 'detour finding' technique used in most of the prior work on simple shortest paths, replacement paths, and distance sensitivity oracles. We present new algorithms for generating simple cycles and simple paths in G in non-decreasing order of their weight. The algorithm for generating simple paths is much faster, and uses another variant of path extensions.

Cite as

Udit Agarwal and Vijaya Ramachandran. Finding k Simple Shortest Paths and Cycles. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 8:1-8:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ISAAC.2016.8,
  author =	{Agarwal, Udit and Ramachandran, Vijaya},
  title =	{{Finding k Simple Shortest Paths and Cycles}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{8:1--8:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.8},
  URN =		{urn:nbn:de:0030-drops-67830},
  doi =		{10.4230/LIPIcs.ISAAC.2016.8},
  annote =	{Keywords: Graph Algorithms, Shortest Paths, k Simple Shortest Paths, Enumerating Simple Cycles, Enumerating Simple Paths}
}
Packing Short Plane Spanning Trees in Complete Geometric Graphs

Authors: Oswin Aichholzer, Thomas Hackl, Matias Korman, Alexander Pilz, Günter Rote, André van Renssen, Marcel Roeloffzen, and Birgit Vogtenhuber


Abstract
Given a set of points in the plane, we want to establish a connection network between these points that consists of several disjoint layers. Motivated by sensor networks, we want each layer to be spanning and plane, and no edge to be very long (compared to the minimum length needed to obtain a spanning graph). We consider two different approaches: first we show an almost optimal centralized approach to extract two trees. Then we show a constant-factor approximation for a distributed model in which each point can compute its adjacencies using only local information. This second approach may create cycles, but maintains planarity.

Cite as

Oswin Aichholzer, Thomas Hackl, Matias Korman, Alexander Pilz, Günter Rote, André van Renssen, Marcel Roeloffzen, and Birgit Vogtenhuber. Packing Short Plane Spanning Trees in Complete Geometric Graphs. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 9:1-9:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{aichholzer_et_al:LIPIcs.ISAAC.2016.9,
  author =	{Aichholzer, Oswin and Hackl, Thomas and Korman, Matias and Pilz, Alexander and Rote, G\"{u}nter and van Renssen, Andr\'{e} and Roeloffzen, Marcel and Vogtenhuber, Birgit},
  title =	{{Packing Short Plane Spanning Trees in Complete Geometric Graphs}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{9:1--9:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.9},
  URN =		{urn:nbn:de:0030-drops-67823},
  doi =		{10.4230/LIPIcs.ISAAC.2016.9},
  annote =	{Keywords: Geometric Graphs, Graph Packing, Plane Graphs, Minimum Spanning Tree, Bottleneck Edge}
}
Reconstruction of Weakly Simple Polygons from their Edges

Authors: Hugo A. Akitaya and Csaba D. Tóth


Abstract
Given n line segments in the plane, do they form the edge set of a weakly simple polygon; that is, can the segment endpoints be perturbed by at most epsilon, for any epsilon > 0, to obtain a simple polygon? While the analogous question for simple polygons can easily be answered in O(n log n) time, we show that it is NP-complete for weakly simple polygons. We give O(n)-time algorithms in two special cases: when all segments are collinear, or the segment endpoints are in general position. These results extend to the variant in which the segments are directed, and the counterclockwise traversal of a polygon should follow the orientation. We study related problems for the case that the union of the n input segments is connected. (i) If each segment can be subdivided into several segments, find the minimum number of subdivision points to form a weakly simple polygon. (ii) If new line segments can be added, find the minimum total length of new segments that creates a weakly simple polygon. We give worst-case upper and lower bounds for both problems.

Cite as

Hugo A. Akitaya and Csaba D. Tóth. Reconstruction of Weakly Simple Polygons from their Edges. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 10:1-10:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{akitaya_et_al:LIPIcs.ISAAC.2016.10,
  author =	{Akitaya, Hugo A. and T\'{o}th, Csaba D.},
  title =	{{Reconstruction of Weakly Simple Polygons from their Edges}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.10},
  URN =		{urn:nbn:de:0030-drops-67795},
  doi =		{10.4230/LIPIcs.ISAAC.2016.10},
  annote =	{Keywords: simple polygon, line segment, geometric graph}
}
Approximating Smallest Containers for Packing Three-Dimensional Convex Objects

Authors: Helmut Alt and Nadja Scharf


Abstract
We investigate the problem of computing a minimum-volume container for the non-overlapping packing of a given set of three-dimensional convex objects. Already the simplest versions of the problem are NP-hard so that we cannot expect to find exact polynomial time algorithms. We give constant ratio approximation algorithms for packing axis-parallel (rectangular) cuboids under translation into an axis-parallel (rectangular) cuboid as container, for packing cuboids under rigid motions into an axis-parallel cuboid or into an arbitrary convex container, and for packing convex polyhedra under rigid motions into an axis-parallel cuboid or arbitrary convex container. This work gives the first approximability results for the computation of minimum volume containers for the objects described.

Cite as

Helmut Alt and Nadja Scharf. Approximating Smallest Containers for Packing Three-Dimensional Convex Objects. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 11:1-11:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{alt_et_al:LIPIcs.ISAAC.2016.11,
  author =	{Alt, Helmut and Scharf, Nadja},
  title =	{{Approximating Smallest Containers for Packing Three-Dimensional Convex Objects}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{11:1--11:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.11},
  URN =		{urn:nbn:de:0030-drops-67801},
  doi =		{10.4230/LIPIcs.ISAAC.2016.11},
  annote =	{Keywords: computational geometry, packing, approximation algorithm}
}
Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap

Authors: Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B. Riva Shalom


Abstract
We examine the complexity of the online Dictionary Matching with One Gap problem (DMOG), which is the following: preprocess a dictionary D of d patterns, where each pattern contains a special gap symbol that can match any string, so that given a text that arrives online, a character at a time, we can report all of the patterns from D that are suffixes of the text that has arrived so far, before the next character arrives. In more general versions the gap symbols are associated with bounds determining the possible lengths of matching strings. Online DMOG captures the difficulty in a bottleneck procedure for cyber-security, as many digital signatures of viruses manifest themselves as patterns with a single gap. In this paper, we demonstrate that the difficulty in obtaining efficient solutions for the DMOG problem, even in the offline setting, can be traced back to the infamous 3SUM conjecture. We show a conditional lower bound of Omega(delta(G_D) + op) time per text character, where G_D is a bipartite graph that captures the structure of D, delta(G_D) is the degeneracy of this graph, and op is the output size. Moreover, we show a conditional lower bound in terms of the magnitude of gaps for the bounded case, thereby showing that some known offline upper bounds are essentially optimal. We also provide matching upper bounds (up to sub-polynomial factors), in terms of the degeneracy, for the online DMOG problem. In particular, we introduce algorithms whose time cost depends linearly on delta(G_D). Our algorithms make use of graph orientations, together with some additional techniques. These algorithms are of practical interest since, although delta(G_D) can be as large as sqrt(d), and even larger if G_D is a multi-graph, it is typically a very small constant in practice. Finally, when delta(G_D) is large we are able to obtain even more efficient solutions.

Cite as

Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, and B. Riva Shalom. Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 12:1-12:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{amir_et_al:LIPIcs.ISAAC.2016.12,
  author =	{Amir, Amihood and Kopelowitz, Tsvi and Levy, Avivit and Pettie, Seth and Porat, Ely and Shalom, B. Riva},
  title =	{{Mind the Gap: Essentially Optimal Algorithms for Online Dictionary Matching with One Gap}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{12:1--12:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.12},
  URN =		{urn:nbn:de:0030-drops-67841},
  doi =		{10.4230/LIPIcs.ISAAC.2016.12},
  annote =	{Keywords: Pattern matching, Dictionary matching, 3SUM, Triangle reporting}
}
Clustered Planarity with Pipes

Authors: Patrizio Angelini and Giordano Da Lozzo


Abstract
We study the version of the C-Planarity problem in which edges connecting the same pair of clusters must be grouped into pipes, which generalizes the Strip Planarity problem. We give algorithms to decide several families of instances for the two variants in which the order of the pipes around each cluster is given as part of the input or can be chosen by the algorithm.

Cite as

Patrizio Angelini and Giordano Da Lozzo. Clustered Planarity with Pipes. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 13:1-13:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{angelini_et_al:LIPIcs.ISAAC.2016.13,
  author =	{Angelini, Patrizio and Da Lozzo, Giordano},
  title =	{{Clustered Planarity with Pipes}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{13:1--13:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.13},
  URN =		{urn:nbn:de:0030-drops-67817},
  doi =		{10.4230/LIPIcs.ISAAC.2016.13},
  annote =	{Keywords: Clustered Planarity, FPT, SEFE, Graph Drawing}
}
L_1 Geodesic Farthest Neighbors in a Simple Polygon and Related Problems

Authors: Sang Won Bae


Abstract
In this paper, we investigate the L_1 geodesic farthest neighbors in a simple polygon P, and address several fundamental problems related to farthest neighbors. Given a subset S subseteq P, an L_1 geodesic farthest neighbor of p in P from S is one that maximizes the length of the L_1 shortest path from p in P. Our list of problems includes computing the diameter, radius, center, farthest-neighbor Voronoi diagram, and two-center of S under the L_1 geodesic distance. We show that all these problems can be solved in linear or near-linear time based on our new observations on farthest neighbors and extreme points. Among them, the key observation shows that there are at most four extreme points of any compact subset S subseteq P with respect to the L_1 geodesic distance after removing redundancy.

Cite as

Sang Won Bae. L_1 Geodesic Farthest Neighbors in a Simple Polygon and Related Problems. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 14:1-14:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bae:LIPIcs.ISAAC.2016.14,
  author =	{Bae, Sang Won},
  title =	{{L\_1 Geodesic Farthest Neighbors in a Simple Polygon and Related Problems}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{14:1--14:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.14},
  URN =		{urn:nbn:de:0030-drops-67858},
  doi =		{10.4230/LIPIcs.ISAAC.2016.14},
  annote =	{Keywords: simple polygon, L\_1 geodesic distance, farthest neighbor, farthest-neighbor Voronoi diagram, k-center}
}
Approximate Clustering via Metric Partitioning

Authors: Sayan Bandyapadhyay and Kasturi Varadarajan


Abstract
In this paper we consider two metric covering/clustering problems - Minimum Cost Covering Problem (MCC) and k-clustering. In the MCC problem, we are given two point sets X (clients) and Y (servers), and a metric on X cup Y. We would like to cover the clients by balls centered at the servers. The objective function to minimize is the sum of the alpha-th power of the radii of the balls. Here alpha geq 1 is a parameter of the problem (but not of a problem instance). MCC is closely related to the k-clustering problem. The main difference between k-clustering and MCC is that in k-clustering one needs to select k balls to cover the clients. For any eps > 0, we describe quasi-polynomial-time (1 + eps)-approximation algorithms for both problems. However, in the case of k-clustering the algorithm uses (1 + eps)k balls. Prior to our work, a 3^alpha-approximation and a c^alpha-approximation were achieved by polynomial-time algorithms for MCC and k-clustering, respectively, where c > 1 is an absolute constant. These two problems are thus interesting examples of metric covering/clustering problems that admit a (1 + eps)-approximation (using (1 + eps)k balls in the case of k-clustering), if one is willing to settle for quasi-polynomial time. In contrast, for the variant of MCC where alpha is part of the input, we show under standard assumptions that no polynomial-time algorithm can achieve an approximation factor better than O(log |X|) for alpha geq log |X|.

Cite as

Sayan Bandyapadhyay and Kasturi Varadarajan. Approximate Clustering via Metric Partitioning. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 15:1-15:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bandyapadhyay_et_al:LIPIcs.ISAAC.2016.15,
  author =	{Bandyapadhyay, Sayan and Varadarajan, Kasturi},
  title =	{{Approximate Clustering via Metric Partitioning}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{15:1--15:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.15},
  URN =		{urn:nbn:de:0030-drops-67751},
  doi =		{10.4230/LIPIcs.ISAAC.2016.15},
  annote =	{Keywords: Approximation Algorithms, Clustering, Covering, Probabilistic Partitions}
}
Hard Communication Channels for Steganography

Authors: Sebastian Berndt and Maciej Liskiewicz


Abstract
This paper considers steganography - the concept of hiding the presence of secret messages in legal communications - in the computational setting and its relation to cryptography. Very recently, the first (non-polynomial-time) steganographic protocol was shown which, for any communication channel, is provably secure, reliable, and has nearly optimal bandwidth. The security is unconditional, i.e. it does not rely on any unproven complexity-theoretic assumption. This disproves the claim that the existence of one-way functions and access to a communication channel oracle are both necessary and sufficient conditions for the existence of secure steganography, in the sense that secure and reliable steganography exists independently of the existence of one-way functions. In this paper, we prove that this equivalence also does not hold in the more realistic setting where the stegosystem is polynomial-time bounded. We prove this by constructing (a) a channel for which secure steganography exists if and only if one-way functions exist and (b) another channel such that secure steganography implies that no one-way functions exist. We therefore show that security-preserving reductions between cryptography and steganography need to be treated very carefully.

Cite as

Sebastian Berndt and Maciej Liskiewicz. Hard Communication Channels for Steganography. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 16:1-16:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{berndt_et_al:LIPIcs.ISAAC.2016.16,
  author =	{Berndt, Sebastian and Liskiewicz, Maciej},
  title =	{{Hard Communication Channels for Steganography}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.16},
  URN =		{urn:nbn:de:0030-drops-67863},
  doi =		{10.4230/LIPIcs.ISAAC.2016.16},
  annote =	{Keywords: provable secure steganography, cryptographic assumptions, pseudorandom functions, one-way functions, signature schemes}
}
On r-Guarding Thin Orthogonal Polygons

Authors: Therese Biedl and Saeed Mehrabi


Abstract
Guarding a polygon with few guards is an old and well-studied problem in computational geometry. Here we consider the following variant: We assume that the polygon is orthogonal and thin in some sense, and we consider a point p to guard a point q if and only if the minimum axis-aligned rectangle spanned by p and q is inside the polygon. A simple proof shows that this problem is NP-hard on orthogonal polygons with holes, even if the polygon is thin. If there are no holes, then a thin polygon becomes a tree polygon in the sense that the so-called dual graph of the polygon is a tree. It was known that finding the minimum set of r-guards is polynomial for tree polygons (and in fact for all orthogonal polygons), but the run-time was ~O(n^17). We show here that, with a different approach, the minimum set of r-guards can be found in tree polygons in linear time, answering a question posed by Biedl et al. (SoCG 2011). Furthermore, the approach is much more general, allowing us to specify subsets of points to guard and guards to use, and it generalizes to polygons with h holes or thickness K, becoming fixed-parameter tractable in h + K.

Cite as

Therese Biedl and Saeed Mehrabi. On r-Guarding Thin Orthogonal Polygons. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 17:1-17:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{biedl_et_al:LIPIcs.ISAAC.2016.17,
  author =	{Biedl, Therese and Mehrabi, Saeed},
  title =	{{On r-Guarding Thin Orthogonal Polygons}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{17:1--17:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.17},
  URN =		{urn:nbn:de:0030-drops-67913},
  doi =		{10.4230/LIPIcs.ISAAC.2016.17},
  annote =	{Keywords: Art Gallery Problem, Orthogonal Polygons, r-Guarding, Treewidth, Fixed-parameter Tractable}
}
Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

Authors: Philip Bille, Patrick Hagge Cording, Inge Li Gørtz, Frederik Rye Skjoldjensen, Hjalte Wedel Vildhøj, and Søren Vind


Abstract
Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly repetitive massive data sets such as genomes and web data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of updates. To achieve these results, we revisit the dynamic partial sums problem and the substring concatenation problem. We present new optimal or near optimal bounds for these problems. Plugging in our new results we also immediately obtain new bounds for the string indexing for patterns with wildcards problem and the dynamic text and static pattern matching problem.

Cite as

Philip Bille, Patrick Hagge Cording, Inge Li Gørtz, Frederik Rye Skjoldjensen, Hjalte Wedel Vildhøj, and Søren Vind. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 18:1-18:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bille_et_al:LIPIcs.ISAAC.2016.18,
  author =	{Bille, Philip and Cording, Patrick Hagge and G{\o}rtz, Inge Li and Skjoldjensen, Frederik Rye and Vildh{\o}j, Hjalte Wedel and Vind, S{\o}ren},
  title =	{{Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{18:1--18:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.18},
  URN =		{urn:nbn:de:0030-drops-67872},
  doi =		{10.4230/LIPIcs.ISAAC.2016.18},
  annote =	{Keywords: Relative compression, dynamic compression, dynamic partial sum, substring concatenation, external macro compression}
}
Towards Plane Spanners of Degree 3

Authors: Ahmad Biniaz, Prosenjit Bose, Jean-Lou De Carufel, Cyril Gavoille, Anil Maheshwari, and Michiel Smid


Abstract
Let S be a finite set of points in the plane that are in convex position. We present an algorithm that constructs a plane frac{3+4 pi}{3}-spanner of S whose vertex degree is at most 3. Let Lambda be the vertex set of a finite non-uniform rectangular lattice in the plane. We present an algorithm that constructs a plane 3 sqrt{2}-spanner for Lambda whose vertex degree is at most 3. For points that are in the plane and in general position, we show how to compute plane degree-3 spanners with a linear number of Steiner points.

Cite as

Ahmad Biniaz, Prosenjit Bose, Jean-Lou De Carufel, Cyril Gavoille, Anil Maheshwari, and Michiel Smid. Towards Plane Spanners of Degree 3. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 19:1-19:14, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{biniaz_et_al:LIPIcs.ISAAC.2016.19,
  author =	{Biniaz, Ahmad and Bose, Prosenjit and De Carufel, Jean-Lou and Gavoille, Cyril and Maheshwari, Anil and Smid, Michiel},
  title =	{{Towards Plane Spanners of Degree 3}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{19:1--19:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.19},
  URN =		{urn:nbn:de:0030-drops-67887},
  doi =		{10.4230/LIPIcs.ISAAC.2016.19},
  annote =	{Keywords: plane spanners, degree-3 spanners, convex position, non-uniform lattice}
}
Degree-Constrained Orientation of Maximum Satisfaction: Graph Classes and Parameterized Complexity

Authors: Hans L. Bodlaender, Hirotaka Ono, and Yota Otachi


Abstract
The problem Max W-Light (Max W-Heavy) for an undirected graph is to assign a direction to each edge so that the number of vertices of outdegree at most W (resp. at least W) is maximized. It is known that these problems are NP-hard even for fixed W. For example, Max 0-Light is equivalent to the problem of finding a maximum independent set. In this paper, we show that for any fixed constant W, Max W-Heavy can be solved in linear time for hereditary graph classes for which treewidth is bounded by a function of degeneracy. We show that such graph classes include chordal graphs, circular-arc graphs, d-trapezoid graphs, chordal bipartite graphs, and graphs of bounded clique-width. For Max W-Light, a polynomial-time algorithm additionally requires a polynomial upper bound on the number of potential maximal cliques, in order to apply the metatheorem of Fomin, Todinca, and Villanger [SIAM J. Comput., 44(1):57-87, 2015]. The aforementioned graph classes, except graphs of bounded clique-width, satisfy this condition. For graphs of bounded clique-width, we present a dynamic programming approach that does not use the metatheorem, showing that Max W-Light is polynomial-time solvable for this graph class as well. We also study the parameterized complexity of the problems and show some tractability and intractability results.

Cite as

Hans L. Bodlaender, Hirotaka Ono, and Yota Otachi. Degree-Constrained Orientation of Maximum Satisfaction: Graph Classes and Parameterized Complexity. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 20:1-20:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bodlaender_et_al:LIPIcs.ISAAC.2016.20,
  author =	{Bodlaender, Hans L. and Ono, Hirotaka and Otachi, Yota},
  title =	{{Degree-Constrained Orientation of Maximum Satisfaction: Graph Classes and Parameterized Complexity}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{20:1--20:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.20},
  URN =		{urn:nbn:de:0030-drops-67898},
  doi =		{10.4230/LIPIcs.ISAAC.2016.20},
  annote =	{Keywords: orientation, graph class, width parameter, parameterized complexity}
}
Online Packet Scheduling with Bounded Delay and Lookahead

Authors: Martin Böhm, Marek Chrobak, Lukasz Jez, Fei Li, Jirí Sgall, and Pavel Veselý


Abstract
We study the online bounded-delay packet scheduling problem (PacketScheduling), where packets of unit size arrive at a router over time and need to be transmitted over a network link. Each packet has two attributes: a non-negative weight and a deadline for its transmission. The objective is to maximize the total weight of the transmitted packets. This problem has been well studied in the literature, yet its optimal competitive ratio remains unknown: the best upper bound is 1.828 [Englert and Westermann, SODA 2007], still quite far from the best lower bound of phi approx 1.618 [Hajek, CISS 2001; Andelman et al., SODA 2003; Chin and Fung, Algorithmica, 2003]. In the variant of PacketScheduling with s-bounded instances, each packet can be scheduled in at most s consecutive slots, starting at its release time. The lower bound of phi applies even to the special case of 2-bounded instances, and a phi-competitive algorithm for 3-bounded instances was given in [Chin et al., JDA, 2006]. Improving that result, and addressing a question posed by Goldwasser [SIGACT News, 2010], we present a phi-competitive algorithm for 4-bounded instances. We also study a variant of PacketScheduling where an online algorithm has the additional power of 1-lookahead, knowing at time t which packets will arrive at time t+1. For PacketScheduling with 1-lookahead restricted to 2-bounded instances, we present an online algorithm with competitive ratio frac{1}{2}(sqrt{13} - 1) approx 1.303 and we prove a nearly tight lower bound of frac{1}{4}(1 + sqrt{17}) approx 1.281.

Cite as

Martin Böhm, Marek Chrobak, Lukasz Jez, Fei Li, Jirí Sgall, and Pavel Veselý. Online Packet Scheduling with Bounded Delay and Lookahead. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 21:1-21:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{bohm_et_al:LIPIcs.ISAAC.2016.21,
  author =	{B\"{o}hm, Martin and Chrobak, Marek and Jez, Lukasz and Li, Fei and Sgall, Jir{\'\i} and Vesel\'{y}, Pavel},
  title =	{{Online Packet Scheduling with Bounded Delay and Lookahead}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{21:1--21:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.21},
  URN =		{urn:nbn:de:0030-drops-67901},
  doi =		{10.4230/LIPIcs.ISAAC.2016.21},
  annote =	{Keywords: buffer management, online scheduling, online algorithm, lookahead}
}
Biconnectivity, Chain Decomposition and st-Numbering Using O(n) Bits

Authors: Sankardeep Chakraborty, Venkatesh Raman, and Srinivasa Rao Satti


Abstract
Recent work by Elmasry et al. (STACS 2015) and Asano et al. (ISAAC 2014) reconsidered classical fundamental graph algorithms, focusing on improving the space complexity. Elmasry et al. gave, among others, an implementation of depth first search (DFS) of a graph on n vertices and m edges, taking O(m lg lg n) time using O(n) bits of space, improving on the time bound of O(m lg n) due to Asano et al. Subsequently, Banerjee et al. (COCOON 2016) gave an O(m + n) time implementation using O(m + n) bits for DFS and its classical applications (including testing for biconnectivity, and finding cut vertices and cut edges). Recently, Kammer et al. (MFCS 2016) gave an algorithm for testing biconnectivity using O(n + min{m, n lg lg n}) bits in linear time. In this paper, we consider O(n)-bit implementations of the classical applications of DFS. These include the problems of finding cut vertices and biconnected components, chain decomposition, and st-numbering. Classical algorithms for them typically use DFS and some Omega(lg n) bits of information at each node. Our O(n)-bit implementations for these problems take O(m lg^c n lg lg n) time for some small constant c (c leq 3). Central to our implementation is a succinct representation of the DFS tree and a space-efficient partitioning of the DFS tree into connected subtrees, which may be of independent interest for space-efficient graph algorithms.

Cite as

Sankardeep Chakraborty, Venkatesh Raman, and Srinivasa Rao Satti. Biconnectivity, Chain Decomposition and st-Numbering Using O(n) Bits. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 22:1-22:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{chakraborty_et_al:LIPIcs.ISAAC.2016.22,
  author =	{Chakraborty, Sankardeep and Raman, Venkatesh and Satti, Srinivasa Rao},
  title =	{{Biconnectivity, Chain Decomposition and st-Numbering Using O(n) Bits}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{22:1--22:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.22},
  URN =		{urn:nbn:de:0030-drops-67927},
  doi =		{10.4230/LIPIcs.ISAAC.2016.22},
  annote =	{Keywords: biconnectivity, st-number, chain decomposition, tree cover, space efficient algorithms, read-only memory}
}
Document
On (1, epsilon)-Restricted Max-Min Fair Allocation Problem

Authors: T-H. Hubert Chan, Zhihao Gavin Tang, and Xiaowei Wu


Abstract
We study the max-min fair allocation problem, in which a set of m indivisible items is to be distributed among n agents such that the minimum utility among all agents is maximized. In the restricted setting, the utility of each item j for agent i is either 0 or some non-negative weight w_j. For this setting, Asadpour et al. [TALG, 2012] showed that a certain configuration-LP can be used to estimate the optimal value within a factor of 4 + delta, for any delta > 0, which was recently extended by Annamalai et al. [SODA 2015] to give a polynomial-time 13-approximation algorithm for the problem. On the hardness side, Bezáková and Dani [SIGecom Exch., 2005] showed that it is NP-hard to approximate the problem within any ratio smaller than 2. In this paper we consider the (1, epsilon)-restricted max-min fair allocation problem, in which, for some parameter epsilon in (0, 1), each item j is either heavy (w_j = 1) or light (w_j = epsilon). We show that the (1, epsilon)-restricted case is also NP-hard to approximate within any ratio smaller than 2; hence, this simple special case remains algorithmically interesting. Using the configuration-LP, we are able to estimate the optimal value of the problem within a factor of 3 + delta, for any delta > 0. Extending this idea, we also obtain a quasi-polynomial time (3 + 4 epsilon)-approximation algorithm and a polynomial-time 9-approximation algorithm. Moreover, we show that as epsilon tends to 0, the approximation ratio of our polynomial-time algorithm approaches 3 + 2 sqrt{2} approx 5.83.
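Since the ratios above can only be checked by hand on tiny cases, here is a hedged brute-force reference for the max-min objective: enumerate all n^m assignments of items to agents and maximize the minimum utility. It is exponential and only meant to pin down the objective; in the (1, epsilon)-restricted case each utility utils[i][j] is 0, epsilon, or 1.

```python
from itertools import product

def max_min_allocation(utils):
    """utils[i][j]: utility of item j for agent i. Brute force over all
    assignments of items to agents; exponential, for sanity checks only."""
    n, m = len(utils), len(utils[0])
    best = float("-inf")
    for assign in product(range(n), repeat=m):   # item j goes to agent assign[j]
        vals = [0.0] * n
        for j, i in enumerate(assign):
            vals[i] += utils[i][j]
        best = max(best, min(vals))              # max-min fairness
    return best

# Two agents, restricted utilities with w = (1, 0.5, 0.5):
# max_min_allocation([[1, 0.5, 0.5], [1, 0.5, 0.5]]) == 1.0
```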

Cite as

T-H. Hubert Chan, Zhihao Gavin Tang, and Xiaowei Wu. On (1, epsilon)-Restricted Max-Min Fair Allocation Problem. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 23:1-23:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{chan_et_al:LIPIcs.ISAAC.2016.23,
  author =	{Chan, T-H. Hubert and Tang, Zhihao Gavin and Wu, Xiaowei},
  title =	{{On (1, epsilon)-Restricted Max-Min Fair Allocation Problem}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{23:1--23:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.23},
  URN =		{urn:nbn:de:0030-drops-67939},
  doi =		{10.4230/LIPIcs.ISAAC.2016.23},
  annote =	{Keywords: Max-Min Fair Allocation, Hypergraph Matching}
}
Document
All-Pairs Shortest Paths in Unit-Disk Graphs in Slightly Subquadratic Time

Authors: Timothy M. Chan and Dimitrios Skrepetos


Abstract
In this paper we study the all-pairs shortest paths problem in (unweighted) unit-disk graphs. The previous best solution for this problem required O(n^2 log n) time, by running the O(n log n)-time single-source shortest path algorithm of Cabello and Jejcic [Comput. Geom., 2015] from every source vertex, where n is the number of vertices. We not only manage to eliminate the logarithmic factor, but also obtain the first (slightly) subquadratic algorithm for the problem, running in O(n^2 sqrt{ frac{log log n}{log n} }) time. Our algorithm computes an implicit representation of all the shortest paths, and, in the same amount of time, can also compute the diameter of the graph.
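As a point of reference, the even simpler folklore baseline builds the unit-disk graph explicitly and runs a BFS from every vertex, for O(n^2 + n(n + m)) total time; the sketch below assumes the input is a list of coordinate pairs. The paper's algorithm avoids both the explicit graph and the per-source BFS cost.

```python
from collections import deque

def unit_disk_apsp(points, radius=1.0):
    """All-pairs hop distances in the unit-disk graph of `points`.
    Returns dist[s][v] (-1 if unreachable). Quadratic-plus baseline only."""
    n = len(points)
    r2 = radius * radius
    adj = [[] for _ in range(n)]
    for i in range(n):                      # O(n^2) explicit edge construction
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= r2:
                adj[i].append(j)
                adj[j].append(i)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):                      # BFS from every source
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == -1:
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist
```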

Cite as

Timothy M. Chan and Dimitrios Skrepetos. All-Pairs Shortest Paths in Unit-Disk Graphs in Slightly Subquadratic Time. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 24:1-24:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{chan_et_al:LIPIcs.ISAAC.2016.24,
  author =	{Chan, Timothy M. and Skrepetos, Dimitrios},
  title =	{{All-Pairs Shortest Paths in Unit-Disk Graphs in Slightly Subquadratic Time}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.24},
  URN =		{urn:nbn:de:0030-drops-67948},
  doi =		{10.4230/LIPIcs.ISAAC.2016.24},
  annote =	{Keywords: unit-disk graphs, all-pairs shortest paths, computational geometry}
}
Document
Sink Evacuation on Trees with Dynamic Confluent Flows

Authors: Di Chen and Mordecai Golin


Abstract
Let G = (V, E) be a graph modelling a building or road network in which edges have both travel times (lengths) and capacities associated with them. An edge's capacity is the number of people that can enter that edge in a unit of time. In emergencies, people evacuate towards the exits; if too many people try to evacuate through the same edge, congestion builds up and slows down the evacuation. Graphs with both lengths and capacities are known as Dynamic Flow networks. An evacuation plan for G consists of a choice of exit locations and a partition of the people at the vertices into groups, with each group evacuating to the same exit. The evacuation time of a plan is the time it takes until the last person evacuates. The k-sink evacuation problem is to provide an evacuation plan with k exit locations that minimizes the evacuation time. It is known that this problem is NP-hard for general graphs, but no polynomial-time algorithm was previously known even for the case where G is a tree. This paper presents an O(nk^2 log^5 n) algorithm for the k-sink evacuation problem on trees, which can also be applied to a more general class of problems.

Cite as

Di Chen and Mordecai Golin. Sink Evacuation on Trees with Dynamic Confluent Flows. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 25:1-25:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{chen_et_al:LIPIcs.ISAAC.2016.25,
  author =	{Chen, Di and Golin, Mordecai},
  title =	{{Sink Evacuation on Trees with Dynamic Confluent Flows}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{25:1--25:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.25},
  URN =		{urn:nbn:de:0030-drops-67951},
  doi =		{10.4230/LIPIcs.ISAAC.2016.25},
  annote =	{Keywords: Sink Evacuation, Dynamic Flow, Facility Location, Parametric Search}
}
Document
Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation

Authors: Lijie Chen


Abstract
We study the following problem: with the power of postselection (classically or quantumly), what is one's ability to answer adaptive queries to certain languages? More specifically, for which computational classes C do we have that P^C belongs to PostBPP or PostBQP? While a complete answer to this question seems out of reach given the present state of computational complexity theory, we study the analogous question in query complexity, which sheds light on the limitations of relativized methods (the relativization barrier) for the original question. Informally, we show that, for a partial function f, if there is no efficient small bounded-error algorithm for f, classically or quantumly, then there is no efficient postselection bounded-error algorithm that answers adaptive queries to f, classically or quantumly. Our results imply a new proof of the classical oracle separation P^{NP^O} notsubset PP^O, which is arguably more elegant. They also lead to a new oracle separation P^{SZK^O} notsubset PP^O, which is close to an oracle separation between SZK and PP, an open problem in the field of oracle separations. Our result also implies a hardness amplification construction for polynomial approximation: given a function f on n bits, we construct an adaptive version of f, denoted by F, on O(m·n) bits, such that if f requires large degree to approximate to error 2/3 in a certain one-sided sense, then F requires large degree to approximate even to error 1/2 - 2^{-m}. Our construction achieves the same amplification as the work of Thaler (ICALP, 2016), but by composing with a function of O(log n) deterministic query complexity, in sharp contrast to all previous results, where the composing amplifiers are hard functions in a certain sense.

Cite as

Lijie Chen. Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 26:1-26:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{chen:LIPIcs.ISAAC.2016.26,
  author =	{Chen, Lijie},
  title =	{{Adaptivity vs. Postselection, and Hardness Amplification for Polynomial Approximation}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{26:1--26:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.26},
  URN =		{urn:nbn:de:0030-drops-67960},
  doi =		{10.4230/LIPIcs.ISAAC.2016.26},
  annote =	{Keywords: approximate degree, postselection, hardness amplification, adaptivity}
}
Document
Search on a Line by Byzantine Robots

Authors: Jurek Czyzowicz, Konstantinos Georgiou, Evangelos Kranakis, Danny Krizanc, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende


Abstract
We consider the problem of fault-tolerant parallel search on an infinite line by n robots. Starting from the origin, the robots are required to find a target at an unknown location. The robots can move with maximum speed 1 and can communicate among themselves in wireless mode. However, among the n robots, there are f robots that exhibit Byzantine faults: a faulty robot can fail to report the target even after reaching it, or it can make malicious claims about having found the target when in fact it has not. Given the presence of such faulty robots, the search for the target can only be concluded when the non-faulty robots have sufficient verification that the target has been found. We aim to design algorithms that minimize the value of S_d(n, f), the time to find a target at distance d from the origin by n robots among which f are faulty. We give several algorithms whose running times depend on the ratio f/n, the density of faulty robots, and we also prove lower bounds. Our algorithms are optimal for some densities of faulty robots.
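For background, the fault-free single-robot case is the classical cow-path problem, solved by the doubling strategy sketched below, which finds a target at distance d >= 1 within time 9d, the optimal deterministic ratio. This is only the baseline behind S_d(n, f), not one of the paper's Byzantine-tolerant algorithms.

```python
def cow_path_turn_points():
    """Yield the turn points -1, +2, -4, +8, ... of the doubling strategy."""
    step, sign = 1, -1
    while True:
        yield sign * step
        step *= 2
        sign = -sign

def search_time(target):
    """Total distance walked by one reliable speed-1 robot, starting at 0,
    until it reaches coordinate `target` (terminates for any target != 0)."""
    pos, time = 0, 0
    for turn in cow_path_turn_points():
        lo, hi = min(pos, turn), max(pos, turn)
        if lo <= target <= hi:               # target lies on this leg
            return time + abs(target - pos)
        time += abs(turn - pos)              # walk the whole leg and turn
        pos = turn

# search_time(1) == 3, and the worst case just past a turn point approaches
# ratio 9, e.g. search_time(2.001) / 2.001 ~= 8.25 here and -> 9 in general.
```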

Cite as

Jurek Czyzowicz, Konstantinos Georgiou, Evangelos Kranakis, Danny Krizanc, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende. Search on a Line by Byzantine Robots. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 27:1-27:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{czyzowicz_et_al:LIPIcs.ISAAC.2016.27,
  author =	{Czyzowicz, Jurek and Georgiou, Konstantinos and Kranakis, Evangelos and Krizanc, Danny and Narayanan, Lata and Opatrny, Jaroslav and Shende, Sunil},
  title =	{{Search on a Line by Byzantine Robots}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{27:1--27:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.27},
  URN =		{urn:nbn:de:0030-drops-67972},
  doi =		{10.4230/LIPIcs.ISAAC.2016.27},
  annote =	{Keywords: Cow path problem, Parallel search, Mobile robots, Wireless communication, Byzantine faults}
}
Document
Bipartite Matching with Linear Edge Weights

Authors: Nevzat Onur Domanic, Chi-Kit Lam, and C. Gregory Plaxton


Abstract
Consider a complete weighted bipartite graph G in which each left vertex u has two real numbers intercept and slope, each right vertex v has a real number quality, and the weight of any edge (u, v) is defined as the intercept of u plus the slope of u times the quality of v. Let m (resp., n) denote the number of left (resp., right) vertices, and assume that m geq n. We develop a fast algorithm for computing a maximum weight matching (MWM) of such a graph. Our algorithm begins by computing an MWM of the subgraph induced by the n right vertices and an arbitrary subset of n left vertices; this step is straightforward to perform in O(n log n) time. The remaining m - n left vertices are then inserted into the graph one at a time, in arbitrary order. As each left vertex is inserted, the MWM is updated. It is relatively straightforward to process each such insertion in O(n) time; our main technical contribution is to improve this time bound to O(sqrt{n} log^2 n). This result has an application related to unit-demand auctions. It is well known that the VCG mechanism yields a suitable solution (allocation and prices) for any unit-demand auction. The graph G may be viewed as encoding a special kind of unit-demand auction in which each left vertex u represents a unit-demand bid, each right vertex v represents an item, and the weight of an edge (u, v) represents the offer of bid u on item v. In this context, our fast insertion algorithm immediately provides an O(sqrt{n} log^2 n)-time algorithm for updating a VCG allocation when a new bid is received. We show how to generalize the insertion algorithm to update (an efficient representation of) the VCG prices within the same time bound.
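A hedged sketch of the O(n log n) base case, under the extra assumption that a perfect matching of the n selected left vertices to the n right vertices is sought (e.g., because all edge weights are non-negative): the total weight is sum_i intercept_i + sum_i slope_i * quality_{sigma(i)}, so the intercepts contribute a constant and, by the rearrangement inequality, the remaining sum is maximized by pairing slopes and qualities in the same sorted order. This is not the paper's insertion algorithm, only the starting point it updates.

```python
def perfect_mwm_linear(lefts, qualities):
    """lefts: list of (intercept, slope); qualities: list of reals, same
    length. Returns (total weight, list of (left, right) index pairs) for the
    max-weight PERFECT matching, via sorting (rearrangement inequality)."""
    order_l = sorted(range(len(lefts)), key=lambda u: lefts[u][1])   # by slope
    order_r = sorted(range(len(qualities)), key=lambda v: qualities[v])
    pairs = list(zip(order_l, order_r))
    total = sum(lefts[u][0] + lefts[u][1] * qualities[v] for u, v in pairs)
    return total, pairs

# perfect_mwm_linear([(0, 1), (0, 2)], [10, 20]) -> (50, [(0, 0), (1, 1)]):
# the steeper bid (slope 2) takes the higher-quality item, beating the swap (40).
```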

Cite as

Nevzat Onur Domanic, Chi-Kit Lam, and C. Gregory Plaxton. Bipartite Matching with Linear Edge Weights. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 28:1-28:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{domanic_et_al:LIPIcs.ISAAC.2016.28,
  author =	{Domanic, Nevzat Onur and Lam, Chi-Kit and Plaxton, C. Gregory},
  title =	{{Bipartite Matching with Linear Edge Weights}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{28:1--28:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.28},
  URN =		{urn:nbn:de:0030-drops-67989},
  doi =		{10.4230/LIPIcs.ISAAC.2016.28},
  annote =	{Keywords: Weighted bipartite matching, Unit-demand auctions, VCG allocation and pricing}
}
Document
Raising Permutations to Powers in Place

Authors: Hicham El-Zein, J. Ian Munro, and Matthew Robertson


Abstract
Given a permutation of n elements, stored as an array, we address the problem of replacing the permutation by its kth power. We aim to perform this operation quickly using o(n) bits of extra storage. To this end, we first present an algorithm for inverting permutations that uses O(lg^2 n) additional bits and runs in O(n lg n) worst case time. This result is then generalized to the situation in which the permutation is to be replaced by its kth power. An algorithm whose worst case running time is O(n lg n) and uses O(lg^2 n + min{k lg n, n^{3/4 + epsilon}}) additional bits is presented.
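The inversion case (k = -1) has a folklore in-place solution that walks each cycle and reverses its arrows, marking visited entries in their sign bits; it runs in O(n) time but needs n mark bits, which is exactly the extra space the paper shrinks to O(lg^2 n) bits. A sketch for 0-based arrays:

```python
def invert_in_place(p):
    """Replace permutation p (0-based list) by its inverse. Marks processed
    entries via bitwise NOT (stores -(value+1) < 0), i.e. one extra bit per
    entry borrowed from the sign; folklore, not the paper's method."""
    n = len(p)
    for start in range(n):
        if p[start] < 0:              # entry already processed on some cycle
            continue
        prev, cur = start, p[start]
        while cur != start:           # walk the cycle, redirecting arrows
            nxt = p[cur]
            p[cur] = ~prev            # inverse maps cur -> prev; mark as done
            prev, cur = cur, nxt
        p[start] = ~prev
    for i in range(n):                # clear all marks
        p[i] = ~p[i]

# p = [2, 0, 1]; invert_in_place(p); p == [1, 2, 0]  (the inverse permutation)
```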

Cite as

Hicham El-Zein, J. Ian Munro, and Matthew Robertson. Raising Permutations to Powers in Place. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 29:1-29:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{elzein_et_al:LIPIcs.ISAAC.2016.29,
  author =	{El-Zein, Hicham and Munro, J. Ian and Robertson, Matthew},
  title =	{{Raising Permutations to Powers in Place}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{29:1--29:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.29},
  URN =		{urn:nbn:de:0030-drops-67992},
  doi =		{10.4230/LIPIcs.ISAAC.2016.29},
  annote =	{Keywords: Algorithms, Combinatorics, Inplace, Permutations, Powers}
}
Document
Space-Efficient Plane-Sweep Algorithms

Authors: Amr Elmasry and Frank Kammer


Abstract
We introduce space-efficient plane-sweep algorithms for basic planar geometric problems. It is assumed that the input is in a read-only array of n items and that the available workspace is Theta(s) bits, where lg n <= s <= n * lg n. Three techniques that can be used as general tools in different space-efficient algorithms are introduced and employed within our algorithms. In particular, we give an almost-optimal algorithm for finding the closest pair among a set of n points that runs in O(n^2 /s + n * lg s) time. We also give a simple algorithm to enumerate the intersections of n line segments that runs in O((n^2 /s^{2/3}) * lg s + k) time, where k is the number of intersections. The counting version can be solved in O((n^2/s^{2/3}) * lg s) time. When the segments are axis-parallel, we give an O((n^2/s) * lg^{4/3} s + n^{4/3} * lg^{1/3} n)-time algorithm that counts the intersections and an O((n^2/s) * lg s * lg lg s + n * lg s + k)-time algorithm that enumerates the intersections, where k is the number of intersections. We finally present an algorithm that runs in O((n^2 /s + n * lg s) * sqrt{(n/s) * lg n}) time to calculate Klee's measure of axis-parallel rectangles.
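For orientation, here is the textbook plane sweep for closest pair with unrestricted workspace: points sorted by x, an active strip kept ordered by y, and a local neighborhood scan. The paper's contribution is re-engineering sweeps of this kind to run within Theta(s) bits over a read-only input; this sketch uses a plain sorted list (with linear-time deletions), which is fine for illustration but does not meet the stated bounds.

```python
from bisect import bisect_left, insort

def closest_pair_sweep(points):
    """Returns the smallest pairwise Euclidean distance (len(points) >= 2)."""
    pts = sorted(points)                       # sweep by x-coordinate
    best = float("inf")                        # best squared distance so far
    strip = []                                 # active points, sorted by (y, x)
    left = 0                                   # oldest point still in the strip
    for x, y in pts:
        d = best ** 0.5
        while left < len(pts) and pts[left][0] < x - d:
            strip.remove((pts[left][1], pts[left][0]))   # O(n); a sketch only
            left += 1
        i = bisect_left(strip, (y - d, float("-inf")))   # candidates with
        while i < len(strip) and strip[i][0] < y + d:    # |y' - y| < d
            yy, xx = strip[i]
            best = min(best, (x - xx) ** 2 + (y - yy) ** 2)
            i += 1
        insort(strip, (y, x))
    return best ** 0.5
```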

Cite as

Amr Elmasry and Frank Kammer. Space-Efficient Plane-Sweep Algorithms. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 30:1-30:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{elmasry_et_al:LIPIcs.ISAAC.2016.30,
  author =	{Elmasry, Amr and Kammer, Frank},
  title =	{{Space-Efficient Plane-Sweep Algorithms}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{30:1--30:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.30},
  URN =		{urn:nbn:de:0030-drops-68009},
  doi =		{10.4230/LIPIcs.ISAAC.2016.30},
  annote =	{Keywords: closest pair, line-segments intersection, Klee's measure}
}
Document
Linear Kernels and Linear-Time Algorithms for Finding Large Cuts

Authors: Michael Etscheid and Matthias Mnich


Abstract
The maximum cut problem in graphs and its generalizations are fundamental combinatorial problems. Several of these cut problems were recently shown to be fixed-parameter tractable and to admit polynomial kernels when parameterized above the tight lower bound measured by the size and order of the graph. In this paper we continue this line of research and considerably improve several of those results:
* We show that an algorithm by Crowston et al. [ICALP 2012] for (Signed) Max-Cut Above Edwards-Erdos Bound can be implemented so as to run in linear time 8^k · O(m); this significantly improves the previous analysis with run time 8^k · O(n^4).
* We give an asymptotically optimal kernel for (Signed) Max-Cut Above Edwards-Erdos Bound with O(k) vertices, improving a kernel with O(k^3) vertices by Crowston et al. [COCOON 2013].
* We improve all known kernels for strongly lambda-extendable properties parameterized above the tight lower bound by Crowston et al. [FSTTCS 2013] from O(k^3) vertices to O(k) vertices.
* As a consequence, Max Acyclic Subdigraph parameterized above the Poljak-Turzik bound admits a kernel with O(k) vertices and can be solved in time 2^{O(k)} · n^{O(1)}; this answers an open question of Crowston et al. [FSTTCS 2012].
All presented kernels can be computed in time O(km).
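To fix the parameterization, the sketch below computes the Edwards-Erdos lower bound m/2 + (n-1)/4 (valid for connected graphs), which the parameter k counts above, together with a simple greedy cut. The greedy only guarantees m/2 cut edges and is not the paper's linear-time FPT algorithm.

```python
def edwards_erdos_bound(n, m):
    """Guaranteed max-cut size for a connected graph with n vertices, m edges."""
    return m / 2 + (n - 1) / 4

def greedy_cut(n, edges):
    """Each vertex joins the side opposite the majority of its already-placed
    neighbours, so at least half of all edges end up cut."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    side = [None] * n
    for v in range(n):
        zeros = sum(1 for u in adj[v] if side[u] == 0)
        ones = sum(1 for u in adj[v] if side[u] == 1)
        side[v] = 1 if zeros >= ones else 0
    return sum(1 for a, b in edges if side[a] != side[b]), side
```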

Cite as

Michael Etscheid and Matthias Mnich. Linear Kernels and Linear-Time Algorithms for Finding Large Cuts. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 31:1-31:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{etscheid_et_al:LIPIcs.ISAAC.2016.31,
  author =	{Etscheid, Michael and Mnich, Matthias},
  title =	{{Linear Kernels and Linear-Time Algorithms for Finding Large Cuts}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{31:1--31:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.31},
  URN =		{urn:nbn:de:0030-drops-68016},
  doi =		{10.4230/LIPIcs.ISAAC.2016.31},
  annote =	{Keywords: Max-Cut, fixed-parameter tractability, kernelization}
}
Document
Universal Guard Problems

Authors: Sándor P. Fekete, Qian Li, Joseph S. B. Mitchell, and Christian Scheffer


Abstract
We provide a spectrum of results for the Universal Guard Problem, in which one is to obtain a small set of points ("guards") that are "universal" in their ability to guard any of a set of possible polygonal domains in the plane. We give upper and lower bounds on the number of universal guards that are always sufficient to guard all polygons having a given set of n vertices, or to guard all polygons in a given set of k polygons on an n-point vertex set. Our upper bound proofs include algorithms to construct universal guard sets of the respective cardinalities.

Cite as

Sándor P. Fekete, Qian Li, Joseph S. B. Mitchell, and Christian Scheffer. Universal Guard Problems. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 32:1-32:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{fekete_et_al:LIPIcs.ISAAC.2016.32,
  author =	{Fekete, S\'{a}ndor P. and Li, Qian and Mitchell, Joseph S. B. and Scheffer, Christian},
  title =	{{Universal Guard Problems}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{32:1--32:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.32},
  URN =		{urn:nbn:de:0030-drops-68022},
  doi =		{10.4230/LIPIcs.ISAAC.2016.32},
  annote =	{Keywords: Art Gallery Problem, universal guarding, polygonization, worst-case bounds, robust covering}
}
Document
Fast Approximation Algorithms for the Generalized Survivable Network Design Problem

Authors: Andreas Emil Feldmann, Jochen Könemann, Kanstantsin Pashkovich, and Laura Sanità


Abstract
In a standard f-connectivity network design problem, we are given an undirected graph G = (V, E), a cut-requirement function f : 2^V to N, and non-negative costs c(e) for all e in E. We are then asked to find a minimum-cost vector x in N^E such that x(delta(S)) geq f(S) for all S subseteq V. We focus on the class of such problems where f is a proper function. This encodes many well-studied NP-hard problems such as the generalized survivable network design problem. In this paper we present the first strongly polynomial time FPTAS for solving the LP relaxation of the standard IP formulation of the f-connectivity problem with general proper functions f. Combined with Jain's iterative rounding algorithm, this yields a strongly polynomial time (2 + epsilon)-approximation for the generalized survivable network design problem (where we consider rounding up of rationals an arithmetic operation).

Cite as

Andreas Emil Feldmann, Jochen Könemann, Kanstantsin Pashkovich, and Laura Sanità. Fast Approximation Algorithms for the Generalized Survivable Network Design Problem. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 33:1-33:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{feldmann_et_al:LIPIcs.ISAAC.2016.33,
  author =	{Feldmann, Andreas Emil and K\"{o}nemann, Jochen and Pashkovich, Kanstantsin and Sanit\`{a}, Laura},
  title =	{{Fast Approximation Algorithms for the Generalized Survivable Network Design Problem}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{33:1--33:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.33},
  URN =		{urn:nbn:de:0030-drops-68035},
  doi =		{10.4230/LIPIcs.ISAAC.2016.33},
  annote =	{Keywords: strongly polynomial runtime, generalized survivable network design, primal-dual method}
}
Document
Space-Time Trade-Offs for the Shortest Unique Substring Problem

Authors: Arnab Ganguly, Wing-Kai Hon, Rahul Shah, and Sharma V. Thankachan


Abstract
Given a string X[1, n] and a position k in [1, n], the Shortest Unique Substring of X covering k, denoted by S_k, is a substring X[i, j] of X which satisfies the following conditions: (i) i leq k leq j, (ii) i is the only position where there is an occurrence of X[i, j], and (iii) j - i is minimized. The best-known algorithm [Hon et al., ISAAC 2015] can find S_k for all k in [1, n] in O(n) time using the string X and 2n additional words of working space. Let tau be a given parameter. We present the following new results. For any given k in [1, n], we can compute S_k via a deterministic algorithm in O(n tau^2 log(n/tau)) time using X and additional O(n/tau) words of working space. For every k in [1, n], we can compute S_k via a deterministic algorithm in O(n tau^2 log(n/tau)) time using X and additional O(n/tau) words and 4n + o(n) bits of working space. For both problems above, we present an O(n tau log^{c+1} n)-time randomized algorithm that uses n/log^c n words in addition to those mentioned above, where c geq 0 is an arbitrary constant; in this case, the reported string is unique and covers k but, with probability at most n^{-O(1)}, may not be the shortest. As a consequence of our techniques, we also obtain similar space-and-time tradeoffs for the related problem of finding Maximal Unique Matches of two strings [Delcher et al., Nucleic Acids Res. 1999].
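A brute-force reference (0-based here, versus the abstract's 1-based indexing) that only pins down the definition the paper's algorithms compute: a substring is unique in X iff its first and last occurrences coincide. Roughly cubic time, for tiny inputs only.

```python
def shortest_unique_substring(X, k):
    """Shortest substring of X that occurs exactly once and covers index k.
    Returns (start, substring), or None if none exists (never for k < len(X))."""
    n = len(X)
    for length in range(1, n + 1):            # try shortest lengths first
        lo = max(0, k - length + 1)
        hi = min(k, n - length)
        for i in range(lo, hi + 1):           # windows [i, i+length) covering k
            sub = X[i:i + length]
            if X.find(sub) == X.rfind(sub):   # unique occurrence in X
                return i, sub
    return None

# shortest_unique_substring("abcbab", 0) -> (0, "abc"): "a" and "ab" both
# reoccur at position 4, so the shortest unique cover of position 0 has length 3.
```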

Cite as

Arnab Ganguly, Wing-Kai Hon, Rahul Shah, and Sharma V. Thankachan. Space-Time Trade-Offs for the Shortest Unique Substring Problem. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 34:1-34:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{ganguly_et_al:LIPIcs.ISAAC.2016.34,
  author =	{Ganguly, Arnab and Hon, Wing-Kai and Shah, Rahul and Thankachan, Sharma V.},
  title =	{{Space-Time Trade-Offs for the Shortest Unique Substring Problem}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{34:1--34:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.34},
  URN =		{urn:nbn:de:0030-drops-68041},
  doi =		{10.4230/LIPIcs.ISAAC.2016.34},
  annote =	{Keywords: Suffix Tree, Sparsification, Rabin-Karp Fingerprint, Probabilistic z-Fast Trie, Succinct Data-Structures}
}
Document
The Subset Assignment Problem for Data Placement in Caches

Authors: Shahram Ghandeharizadeh, Sandy Irani, and Jenny Lam


Abstract
We introduce the subset assignment problem, in which items of varying sizes are placed in a set of bins with limited capacity. Items can be replicated and placed in any subset of the bins. Each (item, subset) pair has an associated cost. Not assigning an item to any of the bins is not free in general and can potentially be the most expensive option. The goal is to minimize the total cost of assigning items to subsets without exceeding the bin capacities. This problem is motivated by the design of caching systems composed of banks of memory with varying cost/performance specifications. The ability to replicate a data item in more than one memory bank can benefit the overall performance of the system through a faster recovery time in the event of a memory failure. In this setting, the number n of data objects (items) is very large and the number d of memory banks (bins) is a small constant (on the order of 3 or 4), so the goal is to determine an optimal assignment in time whose dependence on n is as small as possible. The integral version of this problem is NP-hard, since it is a generalization of the knapsack problem. We focus on an efficient solution to the LP relaxation, as the number of fractionally assigned items will be at most d; if the data objects are small with respect to the size of the memory banks, the effect of excluding the fractionally assigned data items from the cache will be small. We give an algorithm that solves the LP relaxation and runs in time O(binom{3^d}{d+1} poly(d) n log(n) log(nC) log(Z)), where Z is the maximum item size and C the maximum storage cost.
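To make the LP relaxation concrete, here is a direct formulation for tiny instances, assuming SciPy is available; the variable set x[i][S] ranges over all 2^d subsets of bins (including the empty set, whose cost need not be zero, per the problem statement). This naive solver is a placeholder for the paper's near-linear-in-n algorithm, and the exact input encoding is an assumption.

```python
from itertools import combinations
from scipy.optimize import linprog

def subset_assignment_lp(sizes, costs, capacities):
    """sizes[i]: size of item i; costs[i]: dict mapping frozenset of bins to
    the cost of placing item i on that subset; capacities[j]: bin capacity.
    Returns the optimal LP value (fractional assignments allowed)."""
    d, n = len(capacities), len(sizes)
    subsets = [frozenset(c) for r in range(d + 1)
               for c in combinations(range(d), r)]
    var = [(i, S) for i in range(n) for S in subsets]
    c = [costs[i][S] for i, S in var]
    # each item is (fractionally) assigned to subsets with total mass 1
    A_eq = [[1.0 if i == item else 0.0 for i, S in var] for item in range(n)]
    b_eq = [1.0] * n
    # each bin's total placed size respects its capacity
    A_ub = [[sizes[i] if j in S else 0.0 for i, S in var] for j in range(d)]
    res = linprog(c, A_ub=A_ub, b_ub=list(capacities), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(var))
    return res.fun   # at most d items end up fractional, as the abstract notes
```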

Cite as

Shahram Ghandeharizadeh, Sandy Irani, and Jenny Lam. The Subset Assignment Problem for Data Placement in Caches. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 35:1-35:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{ghandeharizadeh_et_al:LIPIcs.ISAAC.2016.35,
  author =	{Ghandeharizadeh, Shahram and Irani, Sandy and Lam, Jenny},
  title =	{{The Subset Assignment Problem for Data Placement in Caches}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{35:1--35:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.35},
  URN =		{urn:nbn:de:0030-drops-68058},
  doi =		{10.4230/LIPIcs.ISAAC.2016.35},
  annote =	{Keywords: Memory management, caching, simplex method, linear programming, min-cost flow}
}
Document
A Gap Trichotomy for Boolean Constraint Problems: Extending Schaefer's Theorem

Authors: Lucy Ham


Abstract
In this paper, we investigate "gap problems", which are promise problems where YES instances are flexibly satisfiable in a certain sense, and NO instances are not satisfiable at all. These gap problems generalise a family of constraint-related decision problems, including the constraint satisfaction problem itself, the separation problem (can distinct variables be validly assigned distinct values?) and the 2-robust satisfiability problem (does any assignment on two variables extend to a full satisfying assignment?). We establish a Gap Trichotomy Theorem, which on Boolean domains, completely classifies the complexity of the gap problems considered. As a consequence, we obtain several well-known dichotomy results, as well as dichotomies for the separation problem and the 2-robust satisfiability problem: all are either polynomial-time tractable or NP-complete. Schaefer’s original dichotomy is a notable particular case.

Cite as

Lucy Ham. A Gap Trichotomy for Boolean Constraint Problems: Extending Schaefer's Theorem. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 36:1-36:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{ham:LIPIcs.ISAAC.2016.36,
  author =	{Ham, Lucy},
  title =	{{A Gap Trichotomy for Boolean Constraint Problems: Extending Schaefer's Theorem}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{36:1--36:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.36},
  URN =		{urn:nbn:de:0030-drops-68060},
  doi =		{10.4230/LIPIcs.ISAAC.2016.36},
  annote =	{Keywords: Constraint Satisfaction Problem, Robust satisfiability, Clone theory, Dichotomy, Trichotomy, Boolean}
}
Document
Sliding Tokens on a Cactus

Authors: Duc A. Hoang and Ryuhei Uehara


Abstract
Given two independent sets I and J of a graph G, imagine that a token (coin) is placed on each vertex in I. The Sliding Token problem then asks whether one can transform I into J using a sequence of elementary steps, where each step slides a token from one vertex to one of its neighbors, such that the resulting set of vertices on which tokens are placed remains independent. In this paper, we describe a polynomial-time algorithm for solving Sliding Token in case the graph G is a cactus. Our algorithm is designed based on two observations. First, all structures that forbid the existence of a sequence of token slidings between I and J, if they exist, can be found in polynomial time; a no-instance may be easily deduced using this characterization. Second, without such forbidden structures, a sequence of token slidings between I and J does exist.
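A brute-force reconfiguration check for tiny instances, assuming an adjacency structure indexed by vertex: breadth-first search over all independent sets reachable from I by single token slides. This exponential sketch only fixes the semantics of a "slide"; the paper's point is a polynomial-time algorithm on cacti.

```python
from collections import deque

def sliding_token_reachable(adj, I, J):
    """adj: list/dict of neighbor sets; I, J: iterables of vertices.
    Returns True iff J is reachable from I by token slides."""
    I, J = frozenset(I), frozenset(J)
    def independent(S):
        return all(v not in adj[u] for u in S for v in S if u != v)
    seen, queue = {I}, deque([I])
    while queue:
        S = queue.popleft()
        if S == J:
            return True
        for u in S:                          # slide the token on u ...
            for v in adj[u]:                 # ... to a neighbor v
                T = (S - {u}) | {v}
                if v not in S and T not in seen and independent(T):
                    seen.add(T)
                    queue.append(T)
    return False

# Path on 3 vertices: sliding_token_reachable([{1}, {0, 2}, {1}], {0}, {2})
# is True via the slides {0} -> {1} -> {2}.
```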

Cite as

Duc A. Hoang and Ryuhei Uehara. Sliding Tokens on a Cactus. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 37:1-37:26, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{hoang_et_al:LIPIcs.ISAAC.2016.37,
  author =	{Hoang, Duc A. and Uehara, Ryuhei},
  title =	{{Sliding Tokens on a Cactus}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{37:1--37:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.37},
  URN =		{urn:nbn:de:0030-drops-68074},
  doi =		{10.4230/LIPIcs.ISAAC.2016.37},
  annote =	{Keywords: reconfiguration problem, token sliding, independent set, cactus}
}
Document
Complexity of Distributions and Average-Case Hardness

Authors: Dmitry Itsykson, Alexander Knop, and Dmitry Sokolov


Abstract
We address the following question in average-case complexity: does there exist a language L such that for all easy distributions D the distributional problem (L, D) is easy on the average, while there exists some harder distribution D' such that (L, D') is hard on the average? We consider two complexity measures of distributions: the complexity of sampling and the complexity of computing the distribution function. For the complexity of sampling, we establish a connection between the above question and the hierarchy theorem for sampling distributions recently studied by Thomas Watson. Using this connection we prove that for every 0 < a < b there exist a language L, an ensemble of distributions D samplable in n^{log^b n} steps, and a linear-time algorithm A such that for every ensemble of distributions F samplable in n^{log^a n} steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has infinitely small F-measure, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has infinitely small D-measure. In the case of the complexity of computing the distribution function, we prove the following tight result: for every a > 0 there exist a language L, an ensemble of polynomial-time computable distributions D, and a linear-time algorithm A such that for every ensemble of distributions F computable in n^a steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has F-measure at most 2^{-n/2}, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has D-measure at most 2^{-n+1}.

Cite as

Dmitry Itsykson, Alexander Knop, and Dmitry Sokolov. Complexity of Distributions and Average-Case Hardness. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 38:1-38:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{itsykson_et_al:LIPIcs.ISAAC.2016.38,
  author =	{Itsykson, Dmitry and Knop, Alexander and Sokolov, Dmitry},
  title =	{{Complexity of Distributions and Average-Case Hardness}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{38:1--38:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.38},
  URN =		{urn:nbn:de:0030-drops-68083},
  doi =		{10.4230/LIPIcs.ISAAC.2016.38},
  annote =	{Keywords: average-case complexity, hierarchy theorem, sampling distributions, diagonalization}
}
Document
Computing the Pattern Waiting Time: A Revisit of the Intuitive Approach

Authors: Kai Jin


Abstract
We revisit the waiting time of patterns in repeated independent experiments. We show that the most intuitive approach for computing the waiting time, which reduces it to computing the stopping time of a Markov chain, is optimal from the perspective of computational complexity. For the single-pattern case, this approach requires us to solve a system of m linear equations, where m denotes the length of the pattern. We show that this system can be solved in O(m + n) time, where n denotes the number of possible outcomes of each single experiment: the main procedure costs only O(m) time, while a preprocessing procedure costs O(m + n) time. For the multiple-pattern case, our approach is as efficient as the one given by Li [Ann. Prob., 1980]. Our method has several advantages over other methods. First, it extends to computing the variance or even higher moments of the waiting time in the single-pattern case. Second, it is more intuitive and does not entail tedious mathematics or heavy probability theory. Our main result (Theorem 2) might be of independent interest to the theory of linear equations.
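For a single pattern, the Markov-chain answer can be cross-checked against the classical closed form from martingale arguments (Li, Ann. Prob. 1980; the same quantity behind Conway's leading numbers for Penney's game): E[T] is the sum of 1/Pr(prefix) over every prefix of the pattern that is also a suffix, including the pattern itself. A sketch for i.i.d. letters:

```python
def expected_waiting_time(pattern, prob):
    """prob: dict mapping each letter to its probability. Returns the expected
    number of i.i.d. draws until `pattern` first appears."""
    total = 0.0
    for k in range(1, len(pattern) + 1):
        if pattern[:k] == pattern[-k:]:       # length-k prefix is also a suffix
            pref_prob = 1.0
            for ch in pattern[:k]:
                pref_prob *= prob[ch]
            total += 1.0 / pref_prob
    return total

# Fair coin: expected_waiting_time("HH", {"H": 0.5, "T": 0.5}) == 6.0,
# while expected_waiting_time("HT", {"H": 0.5, "T": 0.5}) == 4.0, the
# classic asymmetry between self-overlapping and non-overlapping patterns.
```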

Cite as

Kai Jin. Computing the Pattern Waiting Time: A Revisit of the Intuitive Approach. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 39:1-39:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{jin:LIPIcs.ISAAC.2016.39,
  author =	{Jin, Kai},
  title =	{{Computing the Pattern Waiting Time: A Revisit of the Intuitive Approach}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{39:1--39:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.39},
  URN =		{urn:nbn:de:0030-drops-68096},
  doi =		{10.4230/LIPIcs.ISAAC.2016.39},
  annote =	{Keywords: Pattern Occurrence, Waiting Time, Penney’s Game, Markov Chain}
}
Document
O(f) Bi-Approximation for Capacitated Covering with Hard Capacities

Authors: Mong-Jen Kao, Hai-Lun Tu, and D. T. Lee


Abstract
We consider capacitated vertex cover with hard capacity constraints (VC-HC) on hypergraphs. In this problem we are given a hypergraph G = (V, E) with a maximum edge size f. Each edge is associated with a demand, and each vertex is associated with a weight (cost), a capacity, and an available multiplicity. The objective is to find a minimum-weight vertex multiset such that the demands of the edges can be covered by the capacities of the vertices and the multiplicity of each vertex does not exceed its available multiplicity. In this paper we present an O(f) bi-approximation for VC-HC that trades off the amount of multiplicity augmentation against the cost of the resulting cover. In particular, we show that, by augmenting the available multiplicity by a factor of k geq 2, a cover with a cost ratio of (1 + frac{1}{k - 1})(f - 1) to the optimal cover for the original instance can be obtained. This improves over a previous result, which has a cost ratio of f^2 via augmenting the available multiplicity by a factor of f.

Cite as

Mong-Jen Kao, Hai-Lun Tu, and D. T. Lee. O(f) Bi-Approximation for Capacitated Covering with Hard Capacities. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 40:1-40:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{kao_et_al:LIPIcs.ISAAC.2016.40,
  author =	{Kao, Mong-Jen and Tu, Hai-Lun and Lee, D. T.},
  title =	{{O(f) Bi-Approximation for Capacitated Covering with Hard Capacities}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{40:1--40:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.40},
  URN =		{urn:nbn:de:0030-drops-68102},
  doi =		{10.4230/LIPIcs.ISAAC.2016.40},
  annote =	{Keywords: Capacitated Covering, Hard Capacities, Bi-criteria Approximation}
}
Document
Surrogate Optimization for p-Norms

Authors: Yasushi Kawase and Kazuhisa Makino


Abstract
In this paper, we study the effect of surrogate objective functions in optimization problems. We introduce the surrogate ratio as a measure of this effect, namely the ratio between the optimal values of the original and surrogate objective functions. We prove that the surrogate ratio is at most mu^{|1/p - 1/q|} when the objective functions are p- and q-norms and the feasible region is a mu-dimensional space (i.e., a subspace of R^mu), a mu-intersection of matroids, or a mu-extendible system. We also show that this is the best possible bound. In addition, for mu-systems, we demonstrate that the ratio becomes mu^{1/p} when p < q and unbounded when p > q. Here, a mu-system is an independence system such that, for any subset of the ground set, the ratio of the cardinality of the largest to the smallest maximal independent subset of it is at most mu. We further extend our results to surrogate ratios for approximate solutions.
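The mu-dimensional case rests on the standard comparison of p- and q-norms on R^mu; the snippet below numerically checks the two-sided inequality behind the mu^{|1/p - 1/q|} bound, with the all-ones vector and a standard unit vector witnessing tightness. It assumes 1 <= p < q and is only a sanity check, not part of the paper's proofs.

```python
import random

def pnorm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

mu, p, q = 8, 1.5, 3.0
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(mu)]
    # ||x||_q <= ||x||_p <= mu^{1/p - 1/q} * ||x||_q   (for p < q)
    assert pnorm(x, q) - 1e-9 <= pnorm(x, p)
    assert pnorm(x, p) <= mu ** (1 / p - 1 / q) * pnorm(x, q) + 1e-9

ones = [1.0] * mu                   # attains the upper bound exactly
unit = [1.0] + [0.0] * (mu - 1)     # attains the lower bound exactly
```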

Cite as

Yasushi Kawase and Kazuhisa Makino. Surrogate Optimization for p-Norms. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 41:1-41:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.41,
  author =	{Kawase, Yasushi and Makino, Kazuhisa},
  title =	{{Surrogate Optimization for p-Norms}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{41:1--41:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.41},
  URN =		{urn:nbn:de:0030-drops-68118},
  doi =		{10.4230/LIPIcs.ISAAC.2016.41},
  annote =	{Keywords: surrogate optimization, matroid, extendible system, p-norm}
}
Document
Optimal Composition Ordering Problems for Piecewise Linear Functions

Authors: Yasushi Kawase, Kazuhisa Makino, and Kento Seimi


Abstract
In this paper, we introduce maximum composition ordering problems. The input is n real functions f_1, ..., f_n : R to R and a constant c in R. We consider two settings: total and partial compositions. The maximum total composition ordering problem is to compute a permutation sigma : [n] to [n] which maximizes f_{sigma(n)} circ f_{sigma(n-1)} circ ... circ f_{sigma(1)}(c), where [n] = {1, ... , n}. The maximum partial composition ordering problem is to compute a permutation sigma : [n] to [n] and a nonnegative integer k (0 leq k leq n) which maximize f_{sigma(k)} circ f_{sigma(k-1)} circ ... circ f_{sigma(1)}(c). We propose O(n log n) time algorithms for the maximum total and partial composition ordering problems for monotone linear functions f_i, which generalize linear deterioration and shortening models for the time-dependent scheduling problem. We also show that the maximum partial composition ordering problem can be solved in polynomial time if f_i is of the form max{a_i x + b_i, c_i} for some constants a_i (geq 0), b_i, and c_i. As a corollary, we show that the two-valued free-order secretary problem can be solved in polynomial time. We finally prove that there exists no constant-factor approximation algorithm for the problems, even if the f_i's are monotone, piecewise linear functions with at most two pieces, unless P=NP.
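A brute-force reference for the total variant, assuming nothing beyond the definition: try all n! orders. The paper's point is that for monotone linear f_i this collapses to O(n log n) sorting, while already two-piece piecewise linear functions make the problem inapproximable unless P = NP.

```python
from itertools import permutations

def max_total_composition(fs, c):
    """fs: list of unary functions; returns (best value, best order),
    where order[0] is applied first. Exponential, for tiny n only."""
    best_val, best_order = float("-inf"), None
    for order in permutations(range(len(fs))):
        x = c
        for i in order:                 # compose left to right
            x = fs[i](x)
        if x > best_val:
            best_val, best_order = x, order
    return best_val, best_order

# With f1(x) = 2x - 3 and f2(x) = x + 1 from c = 0, applying f2 first gives
# f1(f2(0)) = -1, beating f2(f1(0)) = -2, so the order matters already for n = 2.
```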

Cite as

Yasushi Kawase, Kazuhisa Makino, and Kento Seimi. Optimal Composition Ordering Problems for Piecewise Linear Functions. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 42:1-42:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.42,
  author =	{Kawase, Yasushi and Makino, Kazuhisa and Seimi, Kento},
  title =	{{Optimal Composition Ordering Problems for Piecewise Linear Functions}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.42},
  URN =		{urn:nbn:de:0030-drops-68126},
  doi =		{10.4230/LIPIcs.ISAAC.2016.42},
  annote =	{Keywords: function composition, time-dependent scheduling}
}
Document
Additive Approximation Algorithms for Modularity Maximization

Authors: Yasushi Kawase, Tomomi Matsui, and Atsushi Miyauchi


Abstract
The modularity is a quality function in community detection, which was introduced by Newman and Girvan [Phys. Rev. E, 2004]. Community detection in graphs is now often conducted through modularity maximization: given an undirected graph G = (V, E), we are asked to find a partition C of V that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity maximization algorithms with provable approximation guarantees has attracted significant attention in the computer science community. In this study, we further investigate the approximability of modularity maximization. More specifically, we propose a polynomial-time (cos(frac{3 - sqrt{5}}{4} pi) - frac{1 + sqrt{5}}{8})-additive approximation algorithm for the modularity maximization problem. Note here that cos(frac{3 - sqrt{5}}{4} pi) - frac{1 + sqrt{5}}{8} < 0.42084 holds. This improves the current best additive approximation error of 0.4672, which was recently provided by Dinh, Li, and Thai (2015). Interestingly, our analysis also demonstrates that the proposed algorithm obtains a nearly-optimal solution for any instance with a high modularity value. Moreover, we propose a polynomial-time 0.16598-additive approximation algorithm for the maximum modularity cut problem. It should be noted that this is the first non-trivial approximability result for the problem. Finally, we demonstrate that our approximation algorithm can be extended to some related problems.
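For reference, the Newman-Girvan objective being maximized can be evaluated directly: for a partition C of V of an unweighted graph with m edges, Q(C) sums, over communities c, the fraction of edges inside c minus the squared fraction of degree mass in c. The evaluator below is just this objective, not the paper's approximation algorithm.

```python
def modularity(n, edges, communities):
    """n: number of vertices; edges: list of (u, v); communities: list of
    vertex sets partitioning range(n). Returns Newman-Girvan modularity."""
    m = len(edges)
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    q = 0.0
    for c in communities:
        inside = sum(1 for a, b in edges if a in c and b in c)
        degsum = sum(deg[v] for v in c)
        q += inside / m - (degsum / (2 * m)) ** 2
    return q

# Two triangles joined by the bridge (2, 3): the natural partition scores
# modularity({0,1,2} | {3,4,5}) ~= 0.357 on this 7-edge graph.
# modularity(6, [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(2,3)],
#            [{0,1,2},{3,4,5}])
```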

Cite as

Yasushi Kawase, Tomomi Matsui, and Atsushi Miyauchi. Additive Approximation Algorithms for Modularity Maximization. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 43:1-43:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)



@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.43,
  author =	{Kawase, Yasushi and Matsui, Tomomi and Miyauchi, Atsushi},
  title =	{{Additive Approximation Algorithms for Modularity Maximization}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{43:1--43:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.43},
  URN =		{urn:nbn:de:0030-drops-68136},
  doi =		{10.4230/LIPIcs.ISAAC.2016.43},
  annote =	{Keywords: networks, community detection, modularity maximization, approximation algorithms}
}
Document
The Densest Subgraph Problem with a Convex/Concave Size Function

Authors: Yasushi Kawase and Atsushi Miyauchi


Abstract
Given an edge-weighted undirected graph G = (V, E, w), the density of S subseteq V is defined as w(S)/|S|, where w(S) is the sum of the weights of the edges in the subgraph induced by S. The densest subgraph problem asks for S subseteq V that maximizes the density w(S)/|S|. The problem has received significant attention recently because it can be solved exactly in polynomial time. However, the densest subgraph problem has a drawback: it may happen that the obtained subset is too large or too small in comparison with the desired size of the output. In this study, we address the size issue by generalizing the density of S subseteq V. Specifically, we introduce the f-density of S subseteq V, which is defined as w(S)/f(|S|), where f : Z_{geq 0} to R_{geq 0} is a monotonically non-decreasing function. In the f-densest subgraph problem (f-DS), we are asked to find S subseteq V that maximizes the f-density w(S)/f(|S|). Although f-DS does not explicitly specify the size of the output subset of vertices, we can handle the above size issue by using a convex size function f or a concave size function f appropriately. For f-DS with convex function f, we propose a nearly-linear-time algorithm with a provable approximation guarantee. In particular, for f-DS with f(x) = x^alpha (alpha in [1, 2]), our algorithm has an approximation ratio of 2 · n^{(alpha-1)(2-alpha)}. On the other hand, for f-DS with concave function f, we propose a linear-programming-based polynomial-time exact algorithm. It should be emphasized that this algorithm obtains not only an optimal solution to the problem but also subsets of vertices corresponding to the extreme points of the upper convex hull of {(|S|, w(S)) | S subseteq V}, which we refer to as the dense frontier points. We also propose a flow-based combinatorial exact algorithm for unweighted graphs that runs in O(n^3) time. Finally, we propose a nearly-linear-time 3-approximation algorithm.
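A brute-force reference for the f-density objective on tiny graphs, assuming only the definition above: enumerate all nonempty S and maximize w(S)/f(|S|). With f(x) = x this is the classical densest subgraph; a convex f penalizes large subsets and a concave f penalizes small ones, which is how the size issue is steered.

```python
from itertools import combinations

def f_densest_subgraph(vertices, wedges, f):
    """wedges: dict {(u, v): weight}; f: size function on positive ints.
    Returns (best f-density, best vertex subset). Exponential; sketch only."""
    best = (float("-inf"), None)
    for r in range(1, len(vertices) + 1):
        for S in combinations(vertices, r):
            Sset = set(S)
            w = sum(wt for (u, v), wt in wedges.items()
                    if u in Sset and v in Sset)
            best = max(best, (w / f(r), Sset), key=lambda t: t[0])
    return best

# f_densest_subgraph(range(4), {(0, 1): 1, (1, 2): 1, (0, 2): 1, (2, 3): 1},
#                    lambda x: x)   # -> (1.0, {0, 1, 2}), the triangle
```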

Cite as

Yasushi Kawase and Atsushi Miyauchi. The Densest Subgraph Problem with a Convex/Concave Size Function. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 44:1-44:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kawase_et_al:LIPIcs.ISAAC.2016.44,
  author =	{Kawase, Yasushi and Miyauchi, Atsushi},
  title =	{{The Densest Subgraph Problem with a Convex/Concave Size Function}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{44:1--44:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.44},
  URN =		{urn:nbn:de:0030-drops-68149},
  doi =		{10.4230/LIPIcs.ISAAC.2016.44},
  annote =	{Keywords: graphs, dense subgraph extraction, densest subgraph problem, approximation algorithms}
}
Document
On the Classes of Interval Graphs of Limited Nesting and Count of Lengths

Authors: Pavel Klavík, Yota Otachi, and Jiri Šejnoha


Abstract
In 1969, Roberts introduced proper and unit interval graphs and proved that these classes are equal. Natural generalizations of unit interval graphs called k-length interval graphs were considered, in which the number of different lengths of intervals is limited by k. Even after decades of research, no insight into their structure is known, and the complexity of recognition is open even for k = 2. We propose generalizations of proper interval graphs called k-nested interval graphs, in which there are no chains of k + 1 intervals nested in each other. It is easy to see that k-nested interval graphs are a superclass of k-length interval graphs. We give a linear-time recognition algorithm for k-nested interval graphs. This algorithm adds a missing piece to the work of Gajarský et al. [FOCS 2015], showing that testing FO properties on interval graphs is FPT with respect to the nesting k and the length of the formula, while the problem is W[2]-hard when parameterized just by the length of the formula. Further, we show that a generalization of recognition called partial representation extension is polynomial-time solvable for k-nested interval graphs, while it is NP-hard for k-length interval graphs, even when k = 2.
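
As a small illustration (not taken from the paper), the following Python sketch computes the nesting depth of a fixed interval representation, i.e., the length of the longest chain of strictly nested intervals; a representation with distinct endpoints witnesses membership in the k-nested class exactly when this value is at most k.

import bisect

def nesting_depth(intervals):
    # Sort by (left asc, right asc). Assuming distinct endpoints, a chain of
    # strictly nested intervals corresponds exactly to a strictly decreasing
    # subsequence of right endpoints, found LIS-style in O(n log n) on the
    # negated values.
    rights = [r for _, r in sorted(intervals)]
    tails = []
    for x in (-r for r in rights):
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

# Depth 3: (0, 10) contains (1, 9) contains (2, 8); (3, 12) extends no chain.
print(nesting_depth([(0, 10), (1, 9), (2, 8), (3, 12)]))  # 3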

Cite as

Pavel Klavík, Yota Otachi, and Jiri Šejnoha. On the Classes of Interval Graphs of Limited Nesting and Count of Lengths. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 45:1-45:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{klavik_et_al:LIPIcs.ISAAC.2016.45,
  author =	{Klav{\'\i}k, Pavel and Otachi, Yota and \v{S}ejnoha, Jiri},
  title =	{{On the Classes of Interval Graphs of Limited Nesting and Count of Lengths}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{45:1--45:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.45},
  URN =		{urn:nbn:de:0030-drops-68155},
  doi =		{10.4230/LIPIcs.ISAAC.2016.45},
  annote =	{Keywords: interval graphs, proper and unit interval graphs, recognition, partial representation extension}
}
Document
Pattern Matching and Consensus Problems on Weighted Sequences and Profiles

Authors: Tomasz Kociumaka, Solon P. Pissis, and Jakub Radoszewski


Abstract
We study pattern matching problems on two major representations of uncertain sequences used in molecular biology: weighted sequences (also known as position weight matrices, PWM) and profiles (i.e., scoring matrices). In the simple version, in which only the pattern or only the text is uncertain, we obtain efficient algorithms with provable running times using a variation of the lookahead scoring technique. We also consider a general variant of the pattern matching problems in which both the pattern and the text are uncertain. Central to our solution is a special case where the sequences have equal length, called the consensus problem. We propose algorithms for the consensus problem parameterized by the number of strings that match one of the sequences. Our basic approach is a careful adaptation of the classic meet-in-the-middle algorithm for the knapsack problem. On the lower bound side, we prove that our dependence on the parameter is optimal up to lower-order terms, conditioned on the optimality of the original algorithm for the knapsack problem.
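
For reference, the following Python sketch shows the textbook meet-in-the-middle scheme for subset sum, the classic building block the authors adapt (the paper's parameterized adaptation to the consensus problem is more involved).

from bisect import bisect_left

def subset_sums(items):
    sums = {0}
    for x in items:
        sums |= {s + x for s in sums}
    return sorted(sums)

def has_subset_with_sum(items, target):
    # Split in half, enumerate the 2^(n/2) sums on each side, and look up the
    # complement of every left sum in the sorted right sums.
    half = len(items) // 2
    left, right = subset_sums(items[:half]), subset_sums(items[half:])
    for s in left:
        j = bisect_left(right, target - s)
        if j < len(right) and right[j] == target - s:
            return True
    return False

print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 31))  # False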

Cite as

Tomasz Kociumaka, Solon P. Pissis, and Jakub Radoszewski. Pattern Matching and Consensus Problems on Weighted Sequences and Profiles. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 46:1-46:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kociumaka_et_al:LIPIcs.ISAAC.2016.46,
  author =	{Kociumaka, Tomasz and Pissis, Solon P. and Radoszewski, Jakub},
  title =	{{Pattern Matching and Consensus Problems on Weighted Sequences and Profiles}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{46:1--46:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.46},
  URN =		{urn:nbn:de:0030-drops-68166},
  doi =		{10.4230/LIPIcs.ISAAC.2016.46},
  annote =	{Keywords: weighted sequence, position weight matrix, profile matching}
}
Document
Hierarchical Time-Dependent Oracles

Authors: Spyros Kontogiannis, Dorothea Wagner, and Christos Zaroliagis


Abstract
We study networks obeying time-dependent min-cost path metrics, and present novel oracles for them which provably achieve two unique features: (i) subquadratic preprocessing time and space, independent of the metric’s amount of disconcavity; (ii) sublinear query time, in either the network size or the actual Dijkstra-Rank of the query at hand.

Cite as

Spyros Kontogiannis, Dorothea Wagner, and Christos Zaroliagis. Hierarchical Time-Dependent Oracles. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 47:1-47:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kontogiannis_et_al:LIPIcs.ISAAC.2016.47,
  author =	{Kontogiannis, Spyros and Wagner, Dorothea and Zaroliagis, Christos},
  title =	{{Hierarchical Time-Dependent Oracles}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{47:1--47:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.47},
  URN =		{urn:nbn:de:0030-drops-68170},
  doi =		{10.4230/LIPIcs.ISAAC.2016.47},
  annote =	{Keywords: Time-dependent shortest paths, FIFO property, Distance oracles}
}
Document
A Refined Definition for Groups of Moving Entities and its Computation

Authors: Marc van Kreveld, Maarten Löffler, Frank Staals, and Lionov Wiratma


Abstract
One of the important tasks in the analysis of spatio-temporal data collected from moving entities is to find a group: a set of entities that travel together for a sufficiently long period of time. Buchin et al. [JoCG, 2015] introduce a formal definition of groups, analyze its mathematical structure, and present efficient algorithms for computing all maximal groups in a given set of trajectories. In this paper, we refine their definition and argue that our proposed definition corresponds better to human intuition in certain cases, particularly in dense environments. We present algorithms to compute all maximal groups from a set of moving entities according to the new definition. For a set of n moving entities in R^1, specified by linear interpolation in a sequence of tau time stamps, we show that all maximal groups can be computed in O(tau^2 n^4) time. A similar approach applies if the time stamps of entities are not the same, at the cost of a small extra factor of alpha(n) in the running time. In higher dimensions, we can compute all maximal groups in O(tau^2 n^5 log n) time (for any constant number of dimensions). We also show that one tau factor can be traded for a much higher dependence on n by giving an O(tau n^4 2^n) algorithm for the same problem. Consequently, we obtain a linear-time algorithm when the number of entities is constant and the input size relates to the number of time stamps of each entity. Finally, we provide a construction to show that it might be difficult to develop an algorithm with polynomial dependence on n and linear dependence on tau.

Cite as

Marc van Kreveld, Maarten Löffler, Frank Staals, and Lionov Wiratma. A Refined Definition for Groups of Moving Entities and its Computation. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 48:1-48:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{vankreveld_et_al:LIPIcs.ISAAC.2016.48,
  author =	{van Kreveld, Marc and L\"{o}ffler, Maarten and Staals, Frank and Wiratma, Lionov},
  title =	{{A Refined Definition for Groups of Moving Entities and its Computation}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{48:1--48:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.48},
  URN =		{urn:nbn:de:0030-drops-68188},
  doi =		{10.4230/LIPIcs.ISAAC.2016.48},
  annote =	{Keywords: moving entities, trajectories, grouping, computational geometry}
}
Document
A Sidetrack-Based Algorithm for Finding the k Shortest Simple Paths in a Directed Graph

Authors: Denis Kurz and Petra Mutzel


Abstract
We present an algorithm for the k shortest simple path problem on weighted directed graphs (kSSP) that is based on Eppstein’s algorithm for a similar problem in which paths are allowed to contain cycles. In contrast to most other algorithms for kSSP, ours is not based on Yen's algorithm [Networks, 1971] and does not solve replacement path problems. Its worst-case running time is on par with state-of-the-art algorithms for kSSP. Using our algorithm, one may find O(m) simple paths with a single shortest path tree computation and O(n+m) additional time per path in well-behaved cases, where n is the number of nodes and m is the number of edges. Our computational results show that on random graphs and large road networks, these well-behaved cases are quite common and our algorithm is faster than existing algorithms by an order of magnitude.
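
The following Python sketch illustrates the standard sidetrack notion from Eppstein's setting, to which the title refers (illustrative only; the paper's handling of simple paths is more involved): with d(v) the shortest distance from v to the target t, every edge (u, v) of weight w gets a non-negative sidetrack cost w + d(v) - d(u), and the length of any s-t path equals d(s) plus the sum of the sidetrack costs of its edges.

import heapq
from collections import defaultdict

def dist_to_target(edges, t):
    """Dijkstra on the reversed graph; d[v] = shortest distance v -> t."""
    radj = defaultdict(list)
    for u, v, w in edges:
        radj[v].append((u, w))
    d, pq = {t: 0.0}, [(0.0, t)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d.get(u, float("inf")):
            continue
        for x, w in radj[u]:
            nd = du + w
            if nd < d.get(x, float("inf")):
                d[x] = nd
                heapq.heappush(pq, (nd, x))
    return d

def sidetrack_costs(edges, t):
    # Zero cost exactly on edges that lie on some shortest path to t.
    d = dist_to_target(edges, t)
    return {(u, v): w + d[v] - d[u]
            for u, v, w in edges if u in d and v in d}

edges = [("s", "a", 1), ("a", "t", 1), ("s", "t", 3)]
print(sidetrack_costs(edges, "t"))  # ('s','a') and ('a','t') cost 0; ('s','t') costs 1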

Cite as

Denis Kurz and Petra Mutzel. A Sidetrack-Based Algorithm for Finding the k Shortest Simple Paths in a Directed Graph. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 49:1-49:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kurz_et_al:LIPIcs.ISAAC.2016.49,
  author =	{Kurz, Denis and Mutzel, Petra},
  title =	{{A Sidetrack-Based Algorithm for Finding the k Shortest Simple Paths in a Directed Graph}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.49},
  URN =		{urn:nbn:de:0030-drops-68199},
  doi =		{10.4230/LIPIcs.ISAAC.2016.49},
  annote =	{Keywords: directed graph, k-best, shortest path, simple path, weighted graph}
}
Document
On the Complexity of Matching Cut in Graphs of Fixed Diameter

Authors: Hoang-Oanh Le and Van Bang Le


Abstract
In a graph, a matching cut is an edge cut that is a matching. Matching Cut is the problem of deciding whether or not a given graph has a matching cut, which is known to be NP-complete even when restricted to bipartite graphs. It has been proved that Matching Cut is polynomially solvable for graphs of diameter two. In this paper, we show that, for any fixed integer d geq 4, Matching Cut is NP-complete in the class of graphs of diameter d. This almost resolves an open problem posed by Borowiecki and Jesse-Józefczyk in [Matching cutsets in graphs of diameter 2, Theoretical Computer Science 407 (2008) 574-582]. We then show that, for any fixed integer d geq 5, Matching Cut is NP-complete even when restricted to the class of bipartite graphs of diameter d. Complementing the hardness results, we show that Matching Cut is polynomial-time solvable in the class of bipartite graphs of diameter at most three, and point out a new and simple polynomial-time algorithm solving Matching Cut in graphs of diameter two.

Cite as

Hoang-Oanh Le and Van Bang Le. On the Complexity of Matching Cut in Graphs of Fixed Diameter. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 50:1-50:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{le_et_al:LIPIcs.ISAAC.2016.50,
  author =	{Le, Hoang-Oanh and Le, Van Bang},
  title =	{{On the Complexity of Matching Cut in Graphs of Fixed Diameter}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{50:1--50:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.50},
  URN =		{urn:nbn:de:0030-drops-68205},
  doi =		{10.4230/LIPIcs.ISAAC.2016.50},
  annote =	{Keywords: matching cut, NP-hardness, graph algorithm, computational complexity, decomposable graph}
}
Document
On the Optimality of Tape Merge of Two Lists with Similar Size

Authors: Qian Li, Xiaoming Sun, and Jialin Zhang


Abstract
The problem of merging two sorted lists in the least number of pairwise comparisons has been solved completely only for a few special cases. Graham and Karp [TAOCP, 1999] independently discovered that the tape merge algorithm is optimal in the worst case when the two lists have the same size. Stockmeyer and Yao [SICOMP, 1980], Murphy and Paull [Inform. Control, 1979], and Christen [1978] independently showed that when the lists to be merged are of sizes m and n satisfying m leq n leq floor(3m/2) + 1, the tape merge algorithm is optimal in the worst case. This paper extends this result by showing that the tape merge algorithm is optimal in the worst case whenever the size of one list is no larger than 1.52 times the size of the other. The main tool we use to prove lower bounds is Knuth's adversary method [TAOCP, 1999]. In addition, we show that the lower bound cannot be improved to 1.8 via Knuth's adversary method. We also develop a new inequality about Knuth's adversary method, which might be interesting in its own right. Moreover, we design a simple procedure to achieve a constant improvement of the upper bounds for 2m - 2 leq n leq 3m.
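
For concreteness, the following Python sketch implements the tape (two-way linear) merge and counts its pairwise comparisons; on lists of sizes m and n it uses at most m + n - 1 comparisons, the worst case shown optimal here in the regime m leq n leq 1.52m.

def tape_merge(a, b):
    """Merge two sorted lists, counting pairwise comparisons."""
    out, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])  # one side is exhausted: no comparisons
    return out, comparisons

merged, c = tape_merge([1, 4, 6, 8], [2, 3, 7])
print(merged, c)  # [1, 2, 3, 4, 6, 7, 8] with 6 = 4 + 3 - 1 comparisons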

Cite as

Qian Li, Xiaoming Sun, and Jialin Zhang. On the Optimality of Tape Merge of Two Lists with Similar Size. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 51:1-51:17, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ISAAC.2016.51,
  author =	{Li, Qian and Sun, Xiaoming and Zhang, Jialin},
  title =	{{On the Optimality of Tape Merge of Two Lists with Similar Size}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{51:1--51:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.51},
  URN =		{urn:nbn:de:0030-drops-68219},
  doi =		{10.4230/LIPIcs.ISAAC.2016.51},
  annote =	{Keywords: comparison-based sorting, tape merge, optimal sort, adversary method}
}
Document
Dispersing Points on Intervals

Authors: Shimin Li and Haitao Wang


Abstract
We consider a problem of dispersing points on disjoint intervals on a line. Given n pairwise disjoint intervals sorted on a line, we want to find a point in each interval such that the minimum pairwise distance of these points is maximized. Based on a greedy strategy, we present a linear-time algorithm for the problem. We also solve in linear time the cycle version of the problem, where the intervals are given on a cycle.
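
The following Python sketch is an illustrative parametric formulation rather than the paper's linear-time greedy algorithm: for a candidate separation d, a left-most greedy placement decides feasibility, and binary search on d approximates the optimal minimum pairwise distance.

def feasible(intervals, d):
    """intervals: sorted disjoint (a, b) pairs. Place each point as far left
    as possible while keeping separation d from the previous point."""
    prev = None
    for a, b in intervals:
        p = a if prev is None else max(a, prev + d)
        if p > b:
            return False
        prev = p
    return True

def max_min_separation(intervals, eps=1e-9):
    lo, hi = 0.0, intervals[-1][1] - intervals[0][0]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(intervals, mid):
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_min_separation([(0, 1), (2, 3), (5, 6)]), 6))  # 3.0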

Cite as

Shimin Li and Haitao Wang. Dispersing Points on Intervals. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 52:1-52:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{li_et_al:LIPIcs.ISAAC.2016.52,
  author =	{Li, Shimin and Wang, Haitao},
  title =	{{Dispersing Points on Intervals}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{52:1--52:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.52},
  URN =		{urn:nbn:de:0030-drops-68248},
  doi =		{10.4230/LIPIcs.ISAAC.2016.52},
  annote =	{Keywords: dispersing points, intervals, min-max, algorithms, cycles}
}
Document
Optimal Nonpreemptive Scheduling in a Smart Grid Model

Authors: Fu-Hong Liu, Hsiang-Hsuan Liu, and Prudence W. H. Wong


Abstract
We study a scheduling problem arising in demand response management in smart grids. Consumers send in power requests with a flexible feasible time interval during which their requests can be served. The grid controller, upon receiving power requests, schedules each request within the specified interval. The electricity cost is measured by a convex function of the load in each timeslot. The objective is to schedule all requests with the minimum total electricity cost. Previous work has studied cases where jobs have unit power requirement and unit duration. We extend the study to arbitrary power requirements and durations, a setting which has been shown to be NP-hard. We give the first online algorithm for the general problem, and prove that its worst-case competitive ratio is asymptotically optimal. We also prove that the problem is fixed-parameter tractable. Due to space limitations, the missing proofs are presented in the full paper.
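
As a toy illustration of the objective (under assumed conventions: unit-length timeslots and the convex cost load^2; the paper's algorithms are not reproduced here), the following Python sketch evaluates the total electricity cost of a given schedule and shows that the convex cost rewards spreading load.

def total_cost(requests, starts, cost=lambda load: load ** 2):
    """requests: (release r, deadline d, power p, duration ln) tuples;
    starts: chosen integer start s per request with r <= s, s + ln <= d."""
    horizon = max(d for _, d, _, _ in requests)
    load = [0.0] * horizon
    for (r, d, p, ln), s in zip(requests, starts):
        assert r <= s and s + ln <= d, "start violates the feasible window"
        for t in range(s, s + ln):
            load[t] += p
    return sum(cost(l) for l in load)

requests = [(0, 4, 2.0, 2), (0, 4, 2.0, 2)]
print(total_cost(requests, [0, 0]))  # overlapping: two slots of load 4 -> 32.0
print(total_cost(requests, [0, 2]))  # spread out: four slots of load 2 -> 16.0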

Cite as

Fu-Hong Liu, Hsiang-Hsuan Liu, and Prudence W. H. Wong. Optimal Nonpreemptive Scheduling in a Smart Grid Model. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 53:1-53:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ISAAC.2016.53,
  author =	{Liu, Fu-Hong and Liu, Hsiang-Hsuan and Wong, Prudence W. H.},
  title =	{{Optimal Nonpreemptive Scheduling in a Smart Grid Model}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{53:1--53:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.53},
  URN =		{urn:nbn:de:0030-drops-68252},
  doi =		{10.4230/LIPIcs.ISAAC.2016.53},
  annote =	{Keywords: Scheduling, Smart Grid, Convex function cost, Fixed parameter tractable, Online algorithms, Non-preemptive}
}
Document
Distributed and Robust Support Vector Machine

Authors: Yangwei Liu, Hu Ding, Ziyun Huang, and Jinhui Xu


Abstract
In this paper, we consider the distributed version of Support Vector Machine (SVM) under the coordinator model, where the input data (i.e., points in R^d) of the SVM are arbitrarily distributed among k nodes of a network with a coordinator that can communicate with all nodes. We investigate two variants of this problem, with and without outliers. For distributed SVM without outliers, we prove a lower bound on the communication complexity and give a distributed (1-epsilon)-approximation algorithm that matches this lower bound, where epsilon is a user-specified small constant. For distributed SVM with outliers, we present a (1-epsilon)-approximation algorithm that explicitly removes the influence of outliers. Our algorithm is based on a deterministic distributed top-t selection algorithm with communication complexity O(k log t) in the coordinator model. Experimental results on benchmark datasets confirm the theoretical guarantees of our algorithms.

Cite as

Yangwei Liu, Hu Ding, Ziyun Huang, and Jinhui Xu. Distributed and Robust Support Vector Machine. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 54:1-54:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{liu_et_al:LIPIcs.ISAAC.2016.54,
  author =	{Liu, Yangwei and Ding, Hu and Huang, Ziyun and Xu, Jinhui},
  title =	{{Distributed and Robust Support Vector Machine}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{54:1--54:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.54},
  URN =		{urn:nbn:de:0030-drops-68221},
  doi =		{10.4230/LIPIcs.ISAAC.2016.54},
  annote =	{Keywords: Distributed Algorithm, Communication Complexity, Robust Algorithm, SVM}
}
Document
Single Machine Scheduling with Job-Dependent Machine Deterioration

Authors: Wenchang Luo, Yao Xu, Weitian Tong, and Guohui Lin


Abstract
We consider the single machine scheduling problem with job-dependent machine deterioration. In the problem, we are given a single machine with an initial non-negative maintenance level, and a set of jobs, each with a non-preemptive processing time and a machine deterioration. The machine deterioration quantifies the decrement in the machine maintenance level after processing the job. To avoid machine breakdown, the maintenance level must remain non-negative at all times; whenever necessary, a maintenance activity must be allocated to restore the machine maintenance level. The goal of the problem is to schedule the jobs and the maintenance activities such that the total completion time of jobs is minimized. There are two variants of maintenance activities: in the partial maintenance case, each activity can be allocated to increase the machine maintenance level to any level not exceeding the maximum; in the full maintenance case, every activity must be allocated to increase the machine maintenance level to the maximum. In recent work, the problem in the full maintenance case was proven NP-hard; several special cases of the problem in the partial maintenance case were shown solvable in polynomial time, but the complexity of the general problem was left open. In this paper, we first prove that the problem in the partial maintenance case is NP-hard, thus settling the open problem; we then design a 2-approximation algorithm.
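
As a toy illustration of the model in the full maintenance case (under assumed conventions: maintenance restores the level to the maximum M and takes a fixed time; this is not the paper's algorithm), the following Python sketch computes the total completion time of a given job order.

def total_completion_time(jobs, M, maint_time):
    """jobs: (processing_time, deterioration) tuples processed in order;
    a maintenance is inserted whenever the next job would drive the
    maintenance level negative."""
    level, t, total = M, 0.0, 0.0
    for p, delta in jobs:
        assert delta <= M, "job deterioration exceeds the maximum level"
        if delta > level:      # must maintain before this job fits
            t += maint_time
            level = M
        t += p
        level -= delta
        total += t             # completion time of this job
    return total

jobs = [(2.0, 3.0), (1.0, 2.0), (3.0, 1.0)]
print(total_completion_time(jobs, M=4.0, maint_time=2.0))  # 15.0; order matters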

Cite as

Wenchang Luo, Yao Xu, Weitian Tong, and Guohui Lin. Single Machine Scheduling with Job-Dependent Machine Deterioration. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 55:1-55:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{luo_et_al:LIPIcs.ISAAC.2016.55,
  author =	{Luo, Wenchang and Xu, Yao and Tong, Weitian and Lin, Guohui},
  title =	{{Single Machine Scheduling with Job-Dependent Machine Deterioration}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{55:1--55:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.55},
  URN =		{urn:nbn:de:0030-drops-68236},
  doi =		{10.4230/LIPIcs.ISAAC.2016.55},
  annote =	{Keywords: Scheduling, machine deterioration, maintenance, NP-hard, approximation algorithm}
}
Document
Approximation Algorithms for Capacitated k-Travelling Repairmen Problems

Authors: Christopher S. Martin and Mohammad R. Salavatipour


Abstract
We study variants of the capacitated vehicle routing problem. In the multiple depot capacitated k-travelling repairmen problem (MD-CkTRP), we have a collection of clients to be served by one vehicle in a fleet of k identical vehicles based at given depots. Each client has a given demand that must be satisfied, and each vehicle can carry a total of at most Q demand before it must resupply at its original depot. We wish to route the vehicles in a way that obeys the constraints while minimizing the average time (latency) required to serve a client. This generalizes the Multi-depot k-Travelling Repairman Problem (MD-kTRP) [Chekuri and Kumar, IEEE-FOCS, 2003; Post and Swamy, ACM-SIAM SODA, 2015] to the capacitated vehicle setting, and while it has been studied previously [Lysgaard and Wohlk, EJOR, 2014; Rivera et al., Comput Optim Appl, 2015], no approximation algorithm with a proven ratio was known. We give a 42.49-approximation to this general problem, and refine this constant to 25.49 when clients have unit demands. As far as we are aware, these are the first constant-factor approximations for capacitated vehicle routing problems with a latency objective. We achieve these results by developing a framework that allows us to solve a wider range of latency problems, and by crafting various orienteering-style oracles for use in this framework. We also show that a simple LP rounding algorithm has a better approximation ratio for the maximum coverage problem with groups (MCG), first studied by Chekuri and Kumar [APPROX, 2004], and use it as a subroutine in our framework. Our approximation ratio for MD-CkTRP, when restricted to the uncapacitated setting, matches the best known bound for it [Post and Swamy, ACM-SIAM SODA, 2015]. With our framework, any improvement to our oracles or our MCG approximation would yield improved approximations for the corresponding k-TRP problem.

Cite as

Christopher S. Martin and Mohammad R. Salavatipour. Approximation Algorithms for Capacitated k-Travelling Repairmen Problems. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 56:1-56:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{martin_et_al:LIPIcs.ISAAC.2016.56,
  author =	{Martin, Christopher S. and Salavatipour, Mohammad R.},
  title =	{{Approximation Algorithms for Capacitated k-Travelling Repairmen Problems}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{56:1--56:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.56},
  URN =		{urn:nbn:de:0030-drops-68262},
  doi =		{10.4230/LIPIcs.ISAAC.2016.56},
  annote =	{Keywords: approximation, capacitated, latency, group coverage}
}
Document
Scaling and Proximity Properties of Integrally Convex Functions

Authors: Satoko Moriguchi, Kazuo Murota, Akihisa Tamura, and Fabio Tardella


Abstract
In discrete convex analysis, the scaling and proximity properties for the class of L^natural-convex functions were established more than a decade ago and have been used to design efficient minimization algorithms. For the larger class of integrally convex functions of n variables, we show here that the scaling property only holds when n leq 2, while a proximity theorem can be established for any n, but only with an exponential bound. This is, however, sufficient to extend the classical logarithmic complexity result for minimizing a discretely convex function in one dimension to the case of integrally convex functions in two dimensions. Furthermore, we identify a new class of discrete convex functions, called directed integrally convex functions, which is strictly between the classes of L^natural-convex and integrally convex functions but enjoys the same scaling and proximity properties that hold for L^natural-convex functions.

Cite as

Satoko Moriguchi, Kazuo Murota, Akihisa Tamura, and Fabio Tardella. Scaling and Proximity Properties of Integrally Convex Functions. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 57:1-57:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{moriguchi_et_al:LIPIcs.ISAAC.2016.57,
  author =	{Moriguchi, Satoko and Murota, Kazuo and Tamura, Akihisa and Tardella, Fabio},
  title =	{{Scaling and Proximity Properties of Integrally Convex Functions}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{57:1--57:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.57},
  URN =		{urn:nbn:de:0030-drops-68368},
  doi =		{10.4230/LIPIcs.ISAAC.2016.57},
  annote =	{Keywords: Discrete optimization, discrete convexity, proximity theorem, scaling algorithm}
}
Document
Assigning Weights to Minimize the Covering Radius in the Plane

Authors: Eunjin Oh and Hee-Kap Ahn


Abstract
Given a set P of n points in the plane and a multiset W of k weights with k leq n, we assign a weight in W to a point in P to minimize the maximum weighted distance from the weighted center of P to any point in P. In this paper, we give two algorithms that take O(k^2 n^2 log^4 n) time and O(k^5 n log^4 k + kn log^3 n) time, respectively. For constant k, the second algorithm takes only O(n log^3 n) time, which is near-linear.

Cite as

Eunjin Oh and Hee-Kap Ahn. Assigning Weights to Minimize the Covering Radius in the Plane. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 58:1-58:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{oh_et_al:LIPIcs.ISAAC.2016.58,
  author =	{Oh, Eunjin and Ahn, Hee-Kap},
  title =	{{Assigning Weights to Minimize the Covering Radius in the Plane}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{58:1--58:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.58},
  URN =		{urn:nbn:de:0030-drops-68275},
  doi =		{10.4230/LIPIcs.ISAAC.2016.58},
  annote =	{Keywords: Weighted center, facility location, weight assignment, combinatorial optimization, computational geometry}
}
Document
A Near-Optimal Algorithm for Finding an Optimal Shortcut of a Tree

Authors: Eunjin Oh and Hee-Kap Ahn


Abstract
We consider the problem of finding a shortcut connecting two vertices of a graph that minimizes the diameter of the resulting graph. We present an O(n^2 log^3 n)-time algorithm using linear space for the case that the input graph is a tree consisting of n vertices. Additionally, we present an O(n^2 log^3 n)-time algorithm using linear space for a continuous version of this problem.
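
To make the objective concrete, the following Python sketch is a brute-force O(n^4) baseline (far slower than the paper's algorithm): tree distances are computed once, and a shortcut (u, v) of weight w changes d(x, y) to min(d(x, y), d(x, u) + w + d(v, y), d(x, v) + w + d(u, y)).

from itertools import combinations

def tree_distances(n, edges):
    """All-pairs distances in a weighted tree via a DFS from every vertex."""
    d = [[float("inf")] * n for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w)); adj[v].append((u, w))
    for s in range(n):
        stack, d[s][s] = [s], 0.0
        while stack:
            x = stack.pop()
            for y, w in adj[x]:
                if d[s][y] == float("inf"):
                    d[s][y] = d[s][x] + w
                    stack.append(y)
    return d

def best_shortcut(n, edges, weight=lambda u, v: 1.0):
    """Try every candidate shortcut and return the one minimizing the diameter."""
    d = tree_distances(n, edges)
    best_pair, best_diam = None, float("inf")
    for u, v in combinations(range(n), 2):
        w = weight(u, v)
        diam = max(min(d[x][y], d[x][u] + w + d[v][y], d[x][v] + w + d[u][y])
                   for x in range(n) for y in range(n))
        if diam < best_diam:
            best_pair, best_diam = (u, v), diam
    return best_pair, best_diam

# Path 0-1-2-3-4 with unit edges: the best unit shortcut joins the two ends.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0)]
print(best_shortcut(5, edges))  # ((0, 4), 2.0)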

Cite as

Eunjin Oh and Hee-Kap Ahn. A Near-Optimal Algorithm for Finding an Optimal Shortcut of a Tree. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 59:1-59:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{oh_et_al:LIPIcs.ISAAC.2016.59,
  author =	{Oh, Eunjin and Ahn, Hee-Kap},
  title =	{{A Near-Optimal Algorithm for Finding an Optimal Shortcut of a Tree}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{59:1--59:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.59},
  URN =		{urn:nbn:de:0030-drops-68283},
  doi =		{10.4230/LIPIcs.ISAAC.2016.59},
  annote =	{Keywords: Network Augmentation, Shortcuts, Diameter, Trees}
}
Document
Approximate Shortest Distances Among Smooth Obstacles in 3D

Authors: Christian Scheffer and Jan Vahrenhold


Abstract
We consider the classic all-pairs-shortest-paths (APSP) problem in a three-dimensional environment where paths have to avoid a set of smooth obstacles whose surfaces are represented by discrete point sets with n sample points in total. We show that if the point sets represent epsilon-samples of the underlying surfaces, (1 ± O(sqrt{epsilon}))-approximations of the distances between all pairs of sample points can be computed in O(n^{5/2} log^2 n) time.

Cite as

Christian Scheffer and Jan Vahrenhold. Approximate Shortest Distances Among Smooth Obstacles in 3D. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 60:1-60:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{scheffer_et_al:LIPIcs.ISAAC.2016.60,
  author =	{Scheffer, Christian and Vahrenhold, Jan},
  title =	{{Approximate Shortest Distances Among Smooth Obstacles in 3D}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{60:1--60:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.60},
  URN =		{urn:nbn:de:0030-drops-68292},
  doi =		{10.4230/LIPIcs.ISAAC.2016.60},
  annote =	{Keywords: Geodesic distances; approximation algorithm; epsilon sample}
}
Document
An Improved Tax Scheme for Selfish Routing

Authors: Te-Li Wang, Chih-Kuan Yeh, and Ho-Lin Chen


Abstract
We study the problem of routing traffic for independent selfish users in a congested network so as to minimize the total latency. The inefficiency of selfish routing motivates regulating the flow of the system, lowering the total latency of the Nash equilibrium through economic incentives or penalties. When applying taxes to the routes, we follow the definition of [Christodoulou et al., Algorithmica, 2014] and define the ePoA as the Nash total cost, including taxes, in the taxed network over the optimal cost in the original network. We propose a simple tax scheme consisting of step functions imposed on the links. The tax scheme can be applied to routing games with parallel links, affine cost functions and single-commodity networks to lower the ePoA to at most 4/3 - epsilon, where epsilon depends only on the discrepancy between the links. We show that in the two-link case there exists a tax scheme with an ePoA upper bound below 1.192, which is almost tight. Moreover, we design another tax scheme that lowers the ePoA to 1.281 for routing games with groups of links such that links in the same group are similar to each other and groups are sufficiently different.
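
As a worked two-link instance (the classic Pigou-style example with latencies l1(x) = 1 and l2(x) = x; illustrative numbers, not the paper's construction), the following Python sketch exhibits the untaxed 4/3 gap that such tax schemes aim to reduce.

def total_latency(x2):                 # x2 = flow on the second link, in [0, 1]
    return (1.0 - x2) * 1.0 + x2 * x2  # flow-weighted total latency

nash = total_latency(1.0)              # selfish users all take l2: cost 1
opt = min(total_latency(i / 10**4) for i in range(10**4 + 1))  # grid search
print(nash, opt, round(nash / opt, 4))  # 1.0 0.75 1.3333, i.e., PoA = 4/3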

Cite as

Te-Li Wang, Chih-Kuan Yeh, and Ho-Lin Chen. An Improved Tax Scheme for Selfish Routing. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 61:1-61:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{wang_et_al:LIPIcs.ISAAC.2016.61,
  author =	{Wang, Te-Li and Yeh, Chih-Kuan and Chen, Ho-Lin},
  title =	{{An Improved Tax Scheme for Selfish Routing}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{61:1--61:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.61},
  URN =		{urn:nbn:de:0030-drops-68308},
  doi =		{10.4230/LIPIcs.ISAAC.2016.61},
  annote =	{Keywords: selfish routing, price of anarchy, tax}
}
Document
A Linear-Time Algorithm for Integral Multiterminal Flows in Trees

Authors: Mingyu Xiao and Hiroshi Nagamochi


Abstract
In this paper, we study the problem of finding an integral multiflow which maximizes the sum of flow values between every two terminals in an undirected tree with nonnegative integer edge capacities and a set of terminals. In general, the flow value of an integral multiflow is bounded by the cut value of any cut-system, i.e., a collection of disjoint subsets each of which contains exactly one terminal or has an odd cut value; moreover, there always exists a pair of an integral multiflow and a cut-system whose flow value and cut value are equal, that is, a pair of a maximum integral multiflow and a minimum cut-system. We propose an O(n)-time algorithm that finds such a pair of an integral multiflow and a cut-system in a given tree instance with n vertices. This improves the best previous results by a factor of Omega(n). Regarding the given tree as a rooted tree, we define O(n) rooted tree instances by taking each vertex as a root, and establish a recursive formula on the maximum integral multiflow values of these instances, yielding a dynamic programming algorithm that computes the maximum integral multiflow values of all O(n) rooted instances in linear time. We prove that the algorithm implicitly maintains a cut-system, so that not only a maximum integral multiflow but also a minimum cut-system can be constructed in linear time for any rooted instance whenever necessary. The resulting algorithm is rather compact and succinct.

Cite as

Mingyu Xiao and Hiroshi Nagamochi. A Linear-Time Algorithm for Integral Multiterminal Flows in Trees. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 62:1-62:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{xiao_et_al:LIPIcs.ISAAC.2016.62,
  author =	{Xiao, Mingyu and Nagamochi, Hiroshi},
  title =	{{A Linear-Time Algorithm  for Integral Multiterminal Flows in Trees}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{62:1--62:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.62},
  URN =		{urn:nbn:de:0030-drops-68311},
  doi =		{10.4230/LIPIcs.ISAAC.2016.62},
  annote =	{Keywords: Multiterminal flow; Maximum flow; Minimum Cut; Trees; Linear-time algorithms}
}
Document
Shortest Disjoint S-Paths Via Weighted Linear Matroid Parity

Authors: Yutaro Yamaguchi


Abstract
Mader's disjoint S-paths problem unifies two generalizations of bipartite matching: (a) non-bipartite matching and (b) disjoint s-t paths. Lovász (1980, 1981) first proposed an efficient algorithm for this problem via a reduction to matroid matching, which also unifies two generalizations of bipartite matching: (a) non-bipartite matching and (c) matroid intersection. While the weighted versions of problems (a)-(c), in which we aim to minimize the total weight of a feasible solution of designated size, are known to be solvable in polynomial time, the tractability of the corresponding weighted version of Mader's problem had long been open. In this paper, we present the first solution to this problem, with the aid of a linear representation for Lovász's reduction (which leads to a reduction to linear matroid parity) due to Schrijver (2003) and polynomial-time algorithms for a weighted version of linear matroid parity announced by Iwata (2013) and by Pap (2013). Specifically, we give a reduction of the weighted version of Mader's problem to weighted linear matroid parity, which leads to an O(n^5)-time algorithm for the former problem, where n denotes the number of vertices in the input graph. Our reduction technique is also applicable to a further generalized framework, packing non-zero A-paths in group-labeled graphs, introduced by Chudnovsky, Geelen, Gerards, Goddyn, Lohman, and Seymour (2006). The extension leads to the tractability of a broader class of weighted problems not restricted to Mader's setting.

Cite as

Yutaro Yamaguchi. Shortest Disjoint S-Paths Via Weighted Linear Matroid Parity. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 63:1-63:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{yamaguchi:LIPIcs.ISAAC.2016.63,
  author =	{Yamaguchi, Yutaro},
  title =	{{Shortest Disjoint S-Paths Via Weighted Linear Matroid Parity}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{63:1--63:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.63},
  URN =		{urn:nbn:de:0030-drops-68325},
  doi =		{10.4230/LIPIcs.ISAAC.2016.63},
  annote =	{Keywords: Mader's S-paths, packing non-zero A-paths in group-labeled graphs, linear matroid parity, weighted problems, tractability}
}
Document
The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints

Authors: Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee


Abstract
In 1982, Drezner proposed the (1|1)-centroid problem on the plane, in which two players, called the leader and the follower, open facilities to provide service to customers in a competitive manner. The leader opens the first facility, and the follower then opens the second. Each customer patronizes the facility closest to him (ties broken in favor of the leader's), thereby deciding the market share of the two players. The goal is to find the best position for the leader's facility so that his market share is maximized. The best known algorithm for this problem is an O(n^2 log n)-time parametric search approach, which searches over the space of possible market share values. In the same paper, Drezner also proposed a general version of the (1|1)-centroid problem by introducing a minimal distance constraint R, such that the follower's facility is not allowed to be located within distance R of the leader's. He proposed an O(n^5 log n)-time algorithm for this general version, identifying O(n^4) points as candidates for the optimal solution and checking the market share for each of them. In this paper, we develop a new parametric search approach that searches over these O(n^4) candidate points, and present an O(n^2 log n)-time algorithm for the general version, thereby closing the O(n^3) gap between the two bounds.

Cite as

Hung-I Yu, Tien-Ching Lin, and Der-Tsai Lee. The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 64:1-64:12, Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{yu_et_al:LIPIcs.ISAAC.2016.64,
  author =	{Yu, Hung-I and Lin, Tien-Ching and Lee, Der-Tsai},
  title =	{{The (1|1)-Centroid Problem on the Plane Concerning Distance Constraints}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{64:1--64:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.64},
  URN =		{urn:nbn:de:0030-drops-68337},
  doi =		{10.4230/LIPIcs.ISAAC.2016.64},
  annote =	{Keywords: competitive facility, Euclidean plane, parametric search}
}
