eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
0
0
10.4230/LIPIcs.ISAAC.2019
article
LIPIcs, Volume 149, ISAAC'19, Complete Volume
Lu, Pinyan
1
Zhang, Guochuan
2
Shanghai University of Finance and Economics, China
Zhejiang University, China
LIPIcs, Volume 149, ISAAC'19, Complete Volume
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019/LIPIcs.ISAAC.2019.pdf
Theory of computation; Models of computation; Computational complexity and cryptography; Randomness, geometry and discrete structures; Theory and algorithms for application domains; Design and analysis of algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
0:i
0:xvi
10.4230/LIPIcs.ISAAC.2019.0
article
Front Matter, Table of Contents, Preface, Symposium Organization
Lu, Pinyan
1
Zhang, Guochuan
2
Shanghai University of Finance and Economics, China
Zhejiang University, China
Front Matter, Table of Contents, Preface, Symposium Organization
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.0/LIPIcs.ISAAC.2019.0.pdf
Front Matter
Table of Contents
Preface
Symposium Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
1:1
1:18
10.4230/LIPIcs.ISAAC.2019.1
article
Graph Searches and Their End Vertices
Cao, Yixin
1
https://orcid.org/0000-0002-6927-438X
Wang, Zhifeng
2
Rong, Guozhen
2
Wang, Jianxin
2
Department of Computing, Hong Kong Polytechnic University, Hong Kong, China
School of Computer Science and Engineering, Central South University, Changsha, China
Graph search, the process of visiting vertices in a graph in a specific order, has demonstrated magical powers in many important algorithms. But a systematic study was only initiated by Corneil et al. a decade ago, and only then did we start to realize how little we understand it. Even the apparently naïve question "which vertex can be the last visited by a graph search algorithm," known as the end vertex problem, turns out to be quite elusive. We give a full picture of all maximum cardinality searches on chordal graphs, which implies a polynomial-time algorithm for the end vertex problem of maximum cardinality search. It is complemented by a proof of NP-completeness of the same problem on weakly chordal graphs. We also show linear-time algorithms for deciding end vertices of breadth-first searches on interval graphs, and end vertices of lexicographic depth-first searches on chordal graphs. Finally, we present 2^n * n^O(1)-time algorithms for deciding the end vertices of breadth-first searches, depth-first searches, and maximum cardinality searches on general graphs.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.1/LIPIcs.ISAAC.2019.1.pdf
maximum cardinality search
(lexicographic) breadth-first search
(lexicographic) depth-first search
chordal graph
weighted clique graph
end vertex
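The maximum cardinality search studied in this paper is simple to state; the following Python sketch is illustrative only (the adjacency representation and the arbitrary tie-breaking are assumptions, not the paper's), and shows why the end vertex depends on how ties are broken:

```python
def maximum_cardinality_search(adj, start):
    """Maximum cardinality search (MCS): repeatedly visit an unvisited
    vertex with the most already-visited neighbors.
    `adj` maps each vertex to the set of its neighbors."""
    visited = []
    weight = {v: 0 for v in adj}  # count of visited neighbors
    current = start
    while current is not None:
        visited.append(current)
        del weight[current]
        for u in adj[current]:
            if u in weight:
                weight[u] += 1
        # Ties are broken arbitrarily here; the end vertex problem asks
        # which vertices can come last under *some* tie-breaking.
        current = max(weight, key=weight.get) if weight else None
    return visited  # visited[-1] is the end vertex of this run
```

On the path a-b-c started at a, the search must proceed a, b, c, so c is the end vertex of this run.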
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
2:1
2:9
10.4230/LIPIcs.ISAAC.2019.2
article
Lower Bound for Non-Adaptive Estimation of the Number of Defective Items
Bshouty, Nader H.
1
Department of Computer Science, Technion, Haifa, Israel
We prove that any non-adaptive randomized group testing algorithm that estimates the number of defective items to within a constant factor requires at least Omega~(log n) tests. This solves the open problem posed by Damaschke and Sheikh Muhammad in [Peter Damaschke and Azam Sheikh Muhammad, 2010; Peter Damaschke and Azam Sheikh Muhammad, 2010].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.2/LIPIcs.ISAAC.2019.2.pdf
Group Testing
Estimation
Defective Items
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
3:1
3:15
10.4230/LIPIcs.ISAAC.2019.3
article
A Polynomial-Delay Algorithm for Enumerating Connectors Under Various Connectivity Conditions
Haraguchi, Kazuya
1
Nagamochi, Hiroshi
2
Otaru University of Commerce, Midori 3-5-21, Otaru, Hokkaido 047-8501, Japan
Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
We are given an instance (G,I,sigma) with a graph G=(V,E), a set I of items, and a function sigma:V -> 2^I. For a subset X of V, let G[X] denote the subgraph of G induced by X, and I_sigma(X) denote the common item set over X. A subset X of V such that G[X] is connected is called a connector if, for any vertex v in V\X, G[X cup {v}] is not connected or I_sigma(X cup {v}) is a proper subset of I_sigma(X).
In this paper, we present the first polynomial-delay algorithm for enumerating all connectors. For this, we first extend the problem of enumerating connectors to a general setting so that the connectivity condition on X in G can be specified in a more flexible way. We next design a new algorithm for enumerating all solutions in the general setting, which leads to a polynomial-delay algorithm for enumerating all connectors for several connectivity conditions on X in G, such as the biconnectivity of G[X] or the k-edge-connectivity among vertices in X in G.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.3/LIPIcs.ISAAC.2019.3.pdf
Graph with itemsets
Enumeration
Polynomial-delay algorithms
Connectors
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
4:1
4:18
10.4230/LIPIcs.ISAAC.2019.4
article
Top Tree Compression of Tries
Bille, Philip
1
https://orcid.org/0000-0002-1120-5154
Gawrychowski, Paweł
2
https://orcid.org/0000-0002-6993-5440
Gørtz, Inge Li
1
https://orcid.org/0000-0002-8322-4952
Landau, Gad M.
3
https://orcid.org/0000-0002-5684-0629
Weimann, Oren
3
https://orcid.org/0000-0002-4510-7552
Technical University of Denmark, DTU Compute, Denmark
University of Wrocław, Poland
University of Haifa, Israel
We present a compressed representation of tries based on top tree compression [ICALP 2013] that works on a standard, comparison-based, pointer machine model of computation and supports efficient prefix search queries. Namely, we show how to preprocess a set of strings of total length n over an alphabet of size sigma into a compressed data structure of worst-case optimal size O(n/log_sigma n) that given a pattern string P of length m determines if P is a prefix of one of the strings in time O(min(m log sigma,m + log n)). We show that this query time is in fact optimal regardless of the size of the data structure.
Existing solutions either use Omega(n) space or rely on word-RAM techniques, such as tabulation, hashing, address arithmetic, or word-level parallelism, and hence do not work on a pointer machine. Our result is the first solution on a pointer machine that achieves worst-case o(n) space. Along the way, we develop several interesting data structures that work on a pointer machine and are of independent interest. These include an optimal data structure for random access to a grammar-compressed string and an optimal data structure for a variant of the level ancestor problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.4/LIPIcs.ISAAC.2019.4.pdf
pattern matching
tree compression
top trees
pointer machine
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
5:1
5:21
10.4230/LIPIcs.ISAAC.2019.5
article
Two Phase Transitions in Two-Way Bootstrap Percolation
Zehmakan, Ahad N.
1
ETH Zurich, Switzerland
Consider a graph G and an initial random configuration, where each node is black with probability p and white otherwise, independently. In discrete-time rounds, each node becomes black if it has at least r black neighbors and white otherwise. We prove that this basic process exhibits a threshold behavior with two phase transitions when the underlying graph is a d-dimensional torus and identify the threshold values.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.5/LIPIcs.ISAAC.2019.5.pdf
bootstrap percolation
cellular automata
phase transition
d-dimensional torus
r-threshold model
biased majority
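The process in the abstract is easy to simulate; here is a minimal sketch on a one-dimensional torus (a cycle), with the synchronous update rule from the abstract. The choice of dimension and the specific parameters are illustrative assumptions, not the paper's setting (which covers d-dimensional tori):

```python
import random

def two_way_round(colors, r):
    """One synchronous round of two-way bootstrap percolation on a cycle:
    a node becomes black iff at least r of its neighbors are black, and
    white otherwise (two-way: black nodes can turn white again).
    `colors` is a list of booleans; True = black."""
    n = len(colors)
    return [colors[(i - 1) % n] + colors[(i + 1) % n] >= r
            for i in range(n)]

def simulate(n, p, r, rounds, seed=0):
    """Random initial configuration: each node black with probability p."""
    rng = random.Random(seed)
    colors = [rng.random() < p for _ in range(n)]
    for _ in range(rounds):
        colors = two_way_round(colors, r)
    return colors
```

Note that all-black and all-white are both fixed points for r = 2 on the cycle, which is why the initial density p governs the eventual outcome.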
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
6:1
6:13
10.4230/LIPIcs.ISAAC.2019.6
article
Sliding Window Property Testing for Regular Languages
Ganardi, Moses
1
Hucke, Danny
1
Lohrey, Markus
1
Starikovskaya, Tatiana
2
Universität Siegen, Germany
DI/ENS, PSL Research University, Paris, France
We study the problem of recognizing regular languages in a variant of the streaming model of computation, called the sliding window model. In this model, we are given the size n of the sliding window and a stream of symbols. At each time instant, we must decide whether the suffix of length n of the current stream ("the active window") belongs to a given regular language.
Recent works [Moses Ganardi et al., 2018; Moses Ganardi et al., 2016] showed that the space complexity of an optimal deterministic sliding window algorithm for this problem is either constant, logarithmic or linear in the window size n and provided natural language theoretic characterizations of the space complexity classes. Subsequently, [Moses Ganardi et al., 2018] extended this result to randomized algorithms to show that any such algorithm admits either constant, double logarithmic, logarithmic or linear space complexity.
In this work, we make an important step forward and combine the sliding window model with the property testing setting, which results in ultra-efficient algorithms for all regular languages. Informally, a sliding window property tester must accept the active window if it belongs to the language and reject it if it is far from the language. We show that for every regular language, there is a deterministic sliding window property tester that uses logarithmic space and a randomized sliding window property tester with two-sided error that uses constant space.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.6/LIPIcs.ISAAC.2019.6.pdf
Streaming algorithms
approximation algorithms
regular languages
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
7:1
7:22
10.4230/LIPIcs.ISAAC.2019.7
article
On the Hardness of Set Disjointness and Set Intersection with Bounded Universe
Goldstein, Isaac
1
Lewenstein, Moshe
1
Porat, Ely
1
Bar-Ilan University, Ramat Gan, Israel
In the SetDisjointness problem, a collection of m sets S_1,S_2,...,S_m from some universe U is preprocessed in order to answer queries on the emptiness of the intersection of two query sets from the collection. In the SetIntersection variant, all elements in the intersection of the query sets are required to be reported. These are two fundamental problems that have been considered in several papers from both the upper bound and lower bound perspectives.
Several conditional lower bounds for these problems were proven for the tradeoff between preprocessing and query time or the tradeoff between space and query time. Moreover, there are several unconditional hardness results for these problems in some specific computational models. The fundamental nature of the SetDisjointness and SetIntersection problems makes them useful for proving the conditional hardness of other problems from various areas. However, the universe of the elements in the sets may be very large, which may cause the reduction to some other problems to be inefficient and therefore it is not useful for proving their conditional hardness.
In this paper, we prove the conditional hardness of SetDisjointness and SetIntersection with a bounded universe. This conditional hardness is shown for both the interplay between preprocessing and query time and the interplay between space and query time. Moreover, we present several applications of these new conditional lower bounds. These applications demonstrate the strength of our new conditional lower bounds, as they exploit the limited universe size. We believe that this new framework of conditional lower bounds with a bounded universe can be useful for further significant applications.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.7/LIPIcs.ISAAC.2019.7.pdf
set disjointness
set intersection
3SUM
space-time tradeoff
conditional lower bounds
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
8:1
8:19
10.4230/LIPIcs.ISAAC.2019.8
article
Gathering and Election by Mobile Robots in a Continuous Cycle
Flocchini, Paola
1
Killick, Ryan
2
Kranakis, Evangelos
2
Santoro, Nicola
2
Yamashita, Masafumi
3
School of Electrical Eng. and Comp. Sci., University of Ottawa, Ottawa, ON, K1N 6N5, Canada
School of Computer Science, Carleton University, Ottawa, ON, K1S 5B6, Canada
Dept. of Comp. Sci. and Comm. Eng., Kyushu University, Motooka, Fukuoka, 819-0395, Japan
Consider a set of n mobile computational entities, called robots, located and operating on a continuous cycle C (e.g., the perimeter of a closed region of R^2) of arbitrary length l. The robots are identical, can only see their current location, have no location awareness, and cannot communicate at a distance. In this weak setting, we study the classical problems of gathering (GATHER), requiring all robots to meet at the same location; and election (ELECT), requiring all robots to agree on a single one as the "leader". We investigate how to solve the problems depending on the amount of knowledge (exact, upper bound, none) the robots have about their number n and about the length l of the cycle. The cost of the algorithms is analyzed with respect to time and the number of random bits. We establish a variety of new results specific to the continuous cycle - a geometric domain never explored before for GATHER and ELECT in a mobile robot setting; compare Monte Carlo and Las Vegas algorithms; and obtain several optimal bounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.8/LIPIcs.ISAAC.2019.8.pdf
Cycle
Election
Gathering
Las Vegas
Monte Carlo
Randomized Algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
9:1
9:14
10.4230/LIPIcs.ISAAC.2019.9
article
Strategy-Proof Approximation Algorithms for the Stable Marriage Problem with Ties and Incomplete Lists
Hamada, Koki
1
2
https://orcid.org/0000-0002-8863-6809
Miyazaki, Shuichi
3
https://orcid.org/0000-0003-0369-1970
Yanagisawa, Hiroki
4
https://orcid.org/0000-0002-3421-5240
NTT Corporation, 3-9-11, Midori-cho, Musashino-shi, Tokyo 180-8585, Japan
Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku Kyoto 606-8501, Japan
Academic Center for Computing and Media Studies, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
IBM Research - Tokyo, 19-21, Hakozaki-cho, Nihombashi, Chuoh-ku, Tokyo 103-8510, Japan
In the stable marriage problem (SM), a mechanism that always outputs a stable matching is called a stable mechanism. One of the best-known stable mechanisms is the man-oriented Gale-Shapley algorithm (MGS). MGS has the desirable property that it is strategy-proof for the men's side, i.e., no man can obtain a better outcome by falsifying his preference list. We call such a mechanism a man-strategy-proof mechanism. Unfortunately, MGS is not a woman-strategy-proof mechanism. (Of course, if we flip the roles of men and women, we can see that the woman-oriented Gale-Shapley algorithm (WGS) is a woman-strategy-proof but not a man-strategy-proof mechanism.) Roth has shown that there is no stable mechanism that is simultaneously man-strategy-proof and woman-strategy-proof, which is known as Roth's impossibility theorem.
In this paper, we extend these results to the stable marriage problem with ties and incomplete lists (SMTI). Since SMTI is an extension of SM, Roth's impossibility theorem carries over to SMTI. Therefore, we focus on one-sided strategy-proofness. In SMTI, one instance can have stable matchings of different sizes, and it is natural to consider the problem of finding a largest stable matching, known as MAX SMTI. Thus we incorporate the notion of approximation ratios used in the theory of approximation algorithms. We say that a stable mechanism is a c-approximate-stable mechanism if it always returns a stable matching of size at least 1/c of a largest one. We also consider a restricted variant of MAX SMTI, which we call MAX SMTI-1TM, where only men's lists can contain ties (and women's lists must be strictly ordered).
Our results are summarized as follows: (i) MAX SMTI admits both a man-strategy-proof 2-approximate-stable mechanism and a woman-strategy-proof 2-approximate-stable mechanism. (ii) MAX SMTI-1TM admits a woman-strategy-proof 2-approximate-stable mechanism. (iii) MAX SMTI-1TM admits a man-strategy-proof 1.5-approximate-stable mechanism. All these results are tight in terms of approximation ratios. Also, all these results apply for strategy-proofness against coalitions.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.9/LIPIcs.ISAAC.2019.9.pdf
Stable marriage problem
strategy-proofness
approximation algorithm
ties
incomplete lists
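The man-oriented Gale-Shapley mechanism (MGS) discussed in the abstract is classical; the sketch below implements it for plain SM (complete, strictly ordered lists), not for the SMTI setting the paper studies. The dictionary-based interface is an illustrative assumption:

```python
def man_oriented_gale_shapley(men_prefs, women_prefs):
    """Man-oriented Gale-Shapley (MGS) for classical stable marriage:
    free men propose in preference order; each woman keeps her best
    proposer so far. Returns the man-optimal stable matching."""
    # rank[w][m] = position of m in w's list, for O(1) comparisons
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to try
    engaged = {}                             # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])          # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)                   # w rejects m
    return {m: w for w, m in engaged.items()}
```

Strategy-proofness for the proposing side means no man can improve his partner here by submitting a falsified list; the paper asks what survives of this guarantee under ties and incomplete lists.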
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
10:1
10:15
10.4230/LIPIcs.ISAAC.2019.10
article
Online Multidimensional Packing Problems in the Random-Order Model
Naori, David
1
Raz, Danny
1
Computer Science Department, Technion, 32000 Haifa, Israel
We study online multidimensional variants of the generalized assignment problem, which are used to model prominent real-world applications such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a realistic and robust alternative. Using this model, we study the d-dimensional generalized assignment problem, for which we introduce a novel technique that yields an O(d)-competitive algorithm, and we prove a matching lower bound of Omega(d). Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.10/LIPIcs.ISAAC.2019.10.pdf
Random Order
Generalized Assignment Problem
Knapsack Problem
Multidimensional Packing
Secretary Problem
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
11:1
11:17
10.4230/LIPIcs.ISAAC.2019.11
article
Approximate Euclidean Shortest Paths in Polygonal Domains
Inkulu, R.
1
Kapoor, Sanjiv
2
Department of Computer Science & Engineering, IIT Guwahati, India
Department of Computer Science & Engineering, IIT Chicago, USA
Given a set P of h pairwise disjoint simple polygonal obstacles in R^2 defined with n vertices, we compute a sketch Omega of P whose size is independent of n, depending only on h and the input parameter epsilon. We utilize Omega to compute a (1+epsilon)-approximate geodesic shortest path between two given points in O(n + h((lg n) + (lg h)^(1+delta) + (1/epsilon) lg(h/epsilon))) time. Here, epsilon is a user parameter, and delta is a small positive constant (resulting from the time for triangulating the free space of P using the algorithm in [Bar-Yehuda and Chazelle, 1994]). Moreover, we devise a (2+epsilon)-approximation algorithm to answer two-point Euclidean distance queries for the case of convex polygonal obstacles.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.11/LIPIcs.ISAAC.2019.11.pdf
Computational Geometry
Geometric Shortest Paths
Approximation Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
12:1
12:14
10.4230/LIPIcs.ISAAC.2019.12
article
Reachability in High Treewidth Graphs
Jain, Rahul
1
https://orcid.org/0000-0002-8567-9475
Tewari, Raghunath
1
Indian Institute of Technology Kanpur, India
Reachability is the problem of deciding whether there is a path from one vertex to another in a graph. Standard graph traversal algorithms such as DFS and BFS take linear time to decide reachability; however, their space complexity is also linear. On the other hand, Savitch's algorithm takes quasipolynomial time, although its space bound is O(log^2 n). Here, we study space-efficient algorithms for deciding reachability that run in polynomial time.
In this paper, we show that given an n vertex directed graph of treewidth w along with its tree decomposition, there exists an algorithm running in polynomial time and O(w log n) space that solves the reachability problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.12/LIPIcs.ISAAC.2019.12.pdf
graph reachability
simultaneous time-space upper bound
tree decomposition
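The linear-time, linear-space baseline that the abstract contrasts against is plain graph traversal; a minimal BFS sketch (the adjacency-dict representation is an illustrative assumption):

```python
from collections import deque

def reachable(adj, s, t):
    """Decide s -> t reachability in a directed graph by BFS.
    Linear time, but the `seen` set makes the space linear too --
    the tradeoff the paper improves for bounded-treewidth graphs."""
    seen = {s}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return True
        for u in adj.get(v, ()):
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return False
```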
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
13:1
13:15
10.4230/LIPIcs.ISAAC.2019.13
article
Approximate Pricing in Networks: How to Boost the Betweenness and Revenue of a Node
Brokkelkamp, Ruben
1
https://orcid.org/0000-0003-1223-4616
Polak, Sven
2
https://orcid.org/0000-0002-4287-6479
Schäfer, Guido
1
3
Velaj, Yllka
4
Centrum Wiskunde & Informatica (CWI), Amsterdam, Netherlands
Korteweg-de Vries Institute for Mathematics, University of Amsterdam, Netherlands
Vrije Universiteit Amsterdam, Netherlands
ISI Foundation, Turin, Italy
We introduce and study two new pricing problems in networks: Suppose we are given a directed graph G = (V, E) with non-negative edge costs (c_e)_{e in E}, k commodities (s_i, t_i, w_i)_{i in [k]} and a designated node u in V. Each commodity i in [k] is represented by a source-target pair (s_i, t_i) in V x V and a demand w_i>0, specifying that w_i units of flow are sent from s_i to t_i along shortest s_i, t_i-paths (with respect to (c_e)_{e in E}). The demand of each commodity is split evenly over all shortest paths. Assume we can change the edge costs of some of the outgoing edges of u, while the costs of all other edges remain fixed; we also say that we price (or tax) the edges of u.
We study the problem of pricing the edges of u with respect to the following two natural objectives: (i) max-flow: maximize the total flow passing through u, and (ii) max-revenue: maximize the total revenue (flow times tax) through u. Both variants have various applications in practice. For example, the max-flow objective is equivalent to maximizing the betweenness centrality of u, which is one of the most popular measures of the influence of a node in a (social) network. We prove that (except for some special cases) both problems are NP-hard and inapproximable in general, and we therefore resort to approximation algorithms. We derive approximation algorithms for both variants and show that the derived approximation guarantees are best possible.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.13/LIPIcs.ISAAC.2019.13.pdf
Network pricing
Stackelberg network pricing
betweenness centrality
revenue maximization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
14:1
14:14
10.4230/LIPIcs.ISAAC.2019.14
article
Slaying Hydrae: Improved Bounds for Generalized k-Server in Uniform Metrics
Bienkowski, Marcin
1
https://orcid.org/0000-0002-2453-7772
Jeż, Łukasz
1
https://orcid.org/0000-0002-7375-0641
Schmidt, Paweł
1
Institute of Computer Science, University of Wrocław, Poland
The generalized k-server problem is an extension of the weighted k-server problem, which in turn extends the classic k-server problem. In the generalized k-server problem, each of k servers s_1, ..., s_k remains in its own metric space M_i. A request is a tuple (r_1,...,r_k), where r_i in M_i, and to service it, an algorithm needs to move at least one server s_i to the point r_i. The objective is to minimize the total distance traveled by all servers.
In this paper, we focus on the generalized k-server problem for the case where all M_i are uniform metrics. We show an O(k^2 * log k)-competitive randomized algorithm improving over a recent result by Bansal et al. [SODA 2018], who gave an O(k^3 * log k)-competitive algorithm. To this end, we define an abstract online problem, called Hydra game, and we show that a randomized solution of low cost to this game implies a randomized algorithm to the generalized k-server problem with low competitive ratio.
We also show that no randomized algorithm can achieve competitive ratio lower than Omega(k), thus improving the lower bound of Omega(k / log^2 k) by Bansal et al.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.14/LIPIcs.ISAAC.2019.14.pdf
k-server
generalized k-server
competitive analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
15:1
15:19
10.4230/LIPIcs.ISAAC.2019.15
article
Measure and Conquer for Max Hamming Distance XSAT
Hoi, Gordon
1
Stephan, Frank
2
1
School of Computing, National University of Singapore, 13 Computing Drive, Block COM1, Singapore 117417, Republic of Singapore
Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, Block S17, Singapore 119076, Republic of Singapore
XSAT is defined as follows: given a propositional formula in conjunctive normal form, can one find an assignment to the variables such that exactly one literal is true in every clause, while all other literals are false? The decision problem XSAT is known to be NP-complete. Crescenzi and Rossi [Pierluigi Crescenzi and Gianluca Rossi, 2002] introduced the variant where one searches for a pair of solutions of an X3SAT instance with maximal Hamming distance, that is, one wants to identify the largest number k such that there are two solutions of the instance with Hamming distance k. Dahllöf [Vilhelm Dahllöf, 2005; Vilhelm Dahllöf, 2006] provided a branch-and-bound algorithm for Max Hamming Distance XSAT running in O(1.8348^n) time; Fu, Zhou and Yin [Linlu Fu and Minghao Yin, 2012] worked on a more specific problem, Max Hamming Distance X3SAT, and found for this problem an algorithm with runtime O(1.6760^n). In this paper, we propose an exact exponential algorithm that solves the Max Hamming Distance XSAT problem in O(1.4983^n) time. Like the previous works, we use the branch-and-bound technique, alongside a newly defined measure that improves the analysis of the algorithm.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.15/LIPIcs.ISAAC.2019.15.pdf
XSAT
Measure and Conquer
DPLL
Exponential Time Algorithms
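The problem definition above can be made concrete with a brute-force sketch (exponential in a much worse way than the paper's O(1.4983^n) algorithm, and intended only to pin down the definitions; the clause encoding is an illustrative assumption):

```python
from itertools import product

def is_xsat(clauses, assignment):
    """Check that each clause has exactly one true literal.
    A literal is a nonzero int: +v means variable v, -v its negation;
    `assignment` maps variable numbers to booleans."""
    return all(sum((lit > 0) == assignment[abs(lit)] for lit in clause) == 1
               for clause in clauses)

def max_hamming_xsat(clauses, n):
    """Brute-force Max Hamming Distance XSAT over variables 1..n:
    the largest Hamming distance between two XSAT solutions,
    or None if the instance has no solution. O(4^n) pair checks."""
    sols = [dict(enumerate(bits, start=1))
            for bits in product([False, True], repeat=n)
            if is_xsat(clauses, dict(enumerate(bits, start=1)))]
    return max((sum(a[v] != b[v] for v in a) for a in sols for b in sols),
               default=None)
```

For the single clause (x1 or x2), the XSAT solutions are exactly (True, False) and (False, True), whose Hamming distance is 2.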
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
16:1
16:13
10.4230/LIPIcs.ISAAC.2019.16
article
Cyclability in Graph Classes
Crespelle, Christophe
1
Feghali, Carl
1
Golovach, Petr A.
1
Department of Informatics, University of Bergen, Norway
A subset T subseteq V(G) of vertices of a graph G is said to be cyclable if G has a cycle C containing every vertex of T, and for a positive integer k, a graph G is k-cyclable if every subset of vertices of G of size at most k is cyclable. The Terminal Cyclability problem asks, given a graph G and a set T of vertices, whether T is cyclable, and the k-Cyclability problem asks, given a graph G and a positive integer k, whether G is k-cyclable. These problems are generalizations of the classical Hamiltonian Cycle problem. We initiate the study of these problems for graph classes that admit polynomial algorithms for Hamiltonian Cycle. We show that Terminal Cyclability can be solved in linear time for interval graphs, bipartite permutation graphs and cographs. Moreover, we construct certifying algorithms that either produce a solution, that is, a cycle, or output a graph separator that certifies a no-answer. We use these results to show that k-Cyclability can be solved in polynomial time when restricted to the aforementioned graph classes.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.16/LIPIcs.ISAAC.2019.16.pdf
Cyclability
interval graphs
bipartite permutation graphs
cographs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
17:1
17:12
10.4230/LIPIcs.ISAAC.2019.17
article
Complexity of Linear Operators
Kulikov, Alexander S.
1
https://orcid.org/0000-0002-5656-0336
Mikhailin, Ivan
2
Mokhov, Andrey
3
Podolskii, Vladimir
4
https://orcid.org/0000-0001-7154-138X
Steklov Mathematical Institute at St. Petersburg, Russian Academy of Sciences, St. Petersburg State University, Russia
University of California, San Diego, CA, USA
School of Engineering, Newcastle University, UK
Steklov Mathematical Institute, Russian Academy of Sciences, Moscow, Russia
Let A in {0,1}^{n x n} be a matrix with z zeroes and u ones and x be an n-dimensional vector of formal variables over a semigroup (S, o). How many semigroup operations are required to compute the linear operator Ax?
As we observe in this paper, this problem contains as a special case the well-known range queries problem and has a rich variety of applications in such areas as graph algorithms, functional programming, circuit complexity, and others. It is easy to compute Ax using O(u) semigroup operations. The main question studied in this paper is: can Ax be computed using O(z) semigroup operations? We prove that in general this is not possible: there exists a matrix A in {0,1}^{n x n} with exactly two zeroes in every row (hence z=2n) whose complexity is Theta(n alpha(n)) where alpha(n) is the inverse Ackermann function. However, for the case when the semigroup is commutative, we give a constructive proof of an O(z) upper bound. This implies that in commutative settings, complements of sparse matrices can be processed as efficiently as sparse matrices (though the corresponding algorithms are more involved). Note that this covers the cases of Boolean and tropical semirings that have numerous applications, e.g., in graph theory.
As a simple application of the presented linear-size construction, we show how to multiply two n x n matrices over an arbitrary semiring in O(n^2) time if one of these matrices is a 0/1-matrix with O(n) zeroes (i.e., a complement of a sparse matrix).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.17/LIPIcs.ISAAC.2019.17.pdf
algorithms
linear operators
commutativity
range queries
circuit complexity
lower bounds
upper bounds
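The O(u)-operation baseline mentioned in the abstract is straightforward; the sketch below is that naive evaluation (the list-of-lists matrix encoding is an illustrative assumption, and rows are assumed to contain at least one 1):

```python
from functools import reduce

def apply_linear_operator(A, x, op):
    """Compute the linear operator Ax over a semigroup (S, op) naively:
    row i of the result combines, under `op`, exactly the x_j with
    A[i][j] == 1, so the total cost is O(u) semigroup operations, where
    u is the number of ones in A. The paper asks when O(z) operations
    suffice, z being the number of zeroes."""
    return [reduce(op, (xj for aij, xj in zip(row, x) if aij == 1))
            for row in A]
```

Instantiating `op` with `min` gives the tropical setting; with `or`/`and` one obtains the Boolean setting mentioned in the abstract.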
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
18:1
18:19
10.4230/LIPIcs.ISAAC.2019.18
article
New Results for the k-Secretary Problem
Albers, Susanne
1
Ladewig, Leon
1
Department of Informatics, Technical University of Munich, Germany
Suppose that n numbers arrive online in random order and the goal is to select k of them such that the expected sum of the selected items is maximized. The decision for any item is irrevocable and must be made on arrival, without knowing future items. This problem is known as the k-secretary problem, which includes the classical secretary problem as the special case k=1. It is well known that the latter problem can be solved by a simple algorithm with competitive ratio 1/e, which is asymptotically optimal. When k is small, an algorithm beating the threshold of 1/e exists only for k=2 [Chan et al. SODA 2015]; it relies on an involved selection policy. Moreover, there exist results for the case that k is large [Kleinberg SODA 2005].
In this paper we present results for the k-secretary problem, considering the interesting and relevant case that k is small. We focus on simple selection algorithms, accompanied by combinatorial analyses. As a main contribution we propose a natural deterministic algorithm designed to have competitive ratios strictly greater than 1/e for small k >= 2. This algorithm is hardly more complex than the elegant strategy for the classical secretary problem, optimal for k=1, and works for all k >= 1. We explicitly compute its competitive ratios for 2 <= k <= 100, ranging from 0.41 for k=2 to 0.75 for k=100. Moreover, we show that an algorithm proposed by Babaioff et al. [APPROX 2007] has a competitive ratio of 0.4168 for k=2, implying that the previous analysis was not tight. Our analysis reveals a surprising combinatorial property of this algorithm, which might be helpful for a tight analysis of this algorithm for general k.
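The classical k=1 strategy referenced in the abstract (observe roughly the first n/e items, then accept the first item beating the best of them) can be checked with a quick Monte Carlo simulation. This is a sketch of the textbook algorithm, not of the paper's new deterministic algorithm; the sampling length ceil(n/e) is the standard choice.

```python
import math
import random

def classical_secretary(values):
    """Observe the first ceil(n/e) items, then accept the first later item
    that beats the best of the observed prefix (the k = 1 case)."""
    n = len(values)
    m = math.ceil(n / math.e)       # length of the observation phase
    threshold = max(values[:m])
    for v in values[m:]:
        if v > threshold:
            return v
    return values[-1]               # no candidate appeared: forced to take the last item

# Estimate the probability of selecting the maximum, which tends to 1/e ≈ 0.368.
random.seed(0)
n, trials = 100, 20000
wins = sum(
    classical_secretary(random.sample(range(n), n)) == n - 1
    for _ in range(trials)
)
print(wins / trials)
```

For n = 100 the empirical success rate comes out close to the asymptotic 1/e bound mentioned above.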
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.18/LIPIcs.ISAAC.2019.18.pdf
Online algorithms
secretary problem
random order model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
19:1
19:17
10.4230/LIPIcs.ISAAC.2019.19
article
Triangle Estimation Using Tripartite Independent Set Queries
Bhattacharya, Anup
1
Bishnu, Arijit
1
Ghosh, Arijit
1
Mishra, Gopinath
1
Indian Statistical Institute, Kolkata, India
Estimating the number of triangles in a graph is one of the most fundamental problems in sublinear algorithms. In this work, we provide an approximate triangle counting algorithm using only polylogarithmic queries when the number of triangles on any edge in the graph is polylogarithmically bounded. Our query oracle Tripartite Independent Set (TIS) takes three disjoint sets of vertices A, B and C as input, and answers whether there exists a triangle having one endpoint in each of these three sets. Our query model generally belongs to the class of group queries (Ron and Tsur, ACM ToCT, 2016; Dell and Lapinskas, STOC 2018) and in particular is inspired by the Bipartite Independent Set (BIS) query oracle of Beame et al. (ITCS 2018). We extend the algorithmic framework of Beame et al., with TIS replacing BIS, for triangle counting using ideas from color coding due to Alon et al. (J. ACM, 1995) and a concentration inequality for sums of random variables with bounded dependency (Janson, Rand. Struct. Alg., 2004).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.19/LIPIcs.ISAAC.2019.19.pdf
Triangle estimation
query complexity
sublinear algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
20:1
20:23
10.4230/LIPIcs.ISAAC.2019.20
article
Step-By-Step Community Detection in Volume-Regular Graphs
Becchetti, Luca
1
https://orcid.org/0000-0002-4941-0532
Cruciani, Emilio
2
https://orcid.org/0000-0002-4744-5635
Pasquale, Francesco
3
https://orcid.org/0000-0003-1595-5291
Rizzo, Sara
2
https://orcid.org/0000-0002-5551-8216
Sapienza Università di Roma, Italy
Gran Sasso Science Institute, L'Aquila, Italy
Università di Roma "Tor Vergata", Italy
Spectral techniques have proved amongst the most effective approaches to graph clustering. However, in general they require explicit computation of the main eigenvectors of a suitable matrix (usually the Laplacian matrix of the graph).
Recent work (e.g., Becchetti et al., SODA 2017) suggests that observing the temporal evolution of the power method applied to an initial random vector may, at least in some cases, provide enough information on the space spanned by the first two eigenvectors, so as to allow recovery of a hidden partition without explicit eigenvector computations. While the results of Becchetti et al. apply to perfectly balanced partitions and/or graphs that exhibit very strong forms of regularity, we extend their approach to graphs containing a hidden k-partition and characterized by a milder form of volume-regularity. We show that the class of k-volume-regular graphs is the largest class of undirected (possibly weighted) graphs whose transition matrix admits k "stepwise" eigenvectors (i.e., vectors that are constant over each set of the hidden partition). To obtain this result, we highlight a connection between volume regularity and lumpability of Markov chains. Moreover, we prove that if the stepwise eigenvectors are those associated with the first k eigenvalues and the gap between the k-th and the (k+1)-th eigenvalues is sufficiently large, the Averaging dynamics of Becchetti et al. recovers the underlying community structure of the graph in logarithmic time, with high probability.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.20/LIPIcs.ISAAC.2019.20.pdf
Community detection
Distributed algorithms
Dynamics
Markov chains
Spectral analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
21:1
21:14
10.4230/LIPIcs.ISAAC.2019.21
article
Blocking Dominating Sets for H-Free Graphs via Edge Contractions
Galby, Esther
1
Lima, Paloma T.
2
Ries, Bernard
1
Department of Informatics, University of Fribourg, Fribourg, Switzerland
Department of Informatics, University of Bergen, Bergen, Norway
In this paper, we consider the following problem: given a connected graph G, can we reduce the domination number of G by one by using only one edge contraction? We show that the problem is NP-hard when restricted to {P_6,P_4+P_2}-free graphs and that it is coNP-hard when restricted to subcubic claw-free graphs and 2P_3-free graphs. As a consequence, we are able to establish a complexity dichotomy for the problem on H-free graphs when H is connected.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.21/LIPIcs.ISAAC.2019.21.pdf
domination number
blocker problem
H-free graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
22:1
22:17
10.4230/LIPIcs.ISAAC.2019.22
article
Internal Dictionary Matching
Charalampopoulos, Panagiotis
1
2
https://orcid.org/0000-0002-6024-1557
Kociumaka, Tomasz
2
3
https://orcid.org/0000-0002-2477-1702
Mohamed, Manal
1
https://orcid.org/0000-0002-1435-5051
Radoszewski, Jakub
2
4
https://orcid.org/0000-0002-0067-6401
Rytter, Wojciech
2
https://orcid.org/0000-0002-9162-6724
Waleń, Tomasz
2
https://orcid.org/0000-0002-7369-3309
Department of Informatics, King’s College London, London, UK
Institute of Informatics, University of Warsaw, Warsaw, Poland
Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel
Samsung R&D Institute, Warsaw, Poland
We introduce data structures answering queries concerning the occurrences of patterns from a given dictionary D in fragments of a given string T of length n. The dictionary is internal in the sense that each pattern in D is given as a fragment of T. This way, D takes space proportional to the number of patterns d=|D| rather than their total length, which could be Theta(n * d).
In particular, we consider the following types of queries: reporting and counting all occurrences of patterns from D in a fragment T[i..j] (operations Report(i,j) and Count(i,j) below, as well as operation Exists(i,j) that returns true iff Count(i,j)>0) and reporting distinct patterns from D that occur in T[i..j] (operation ReportDistinct(i,j)). We show how to construct, in O((n+d) log^{O(1)} n) time, a data structure that answers each of these queries in time O(log^{O(1)} n+|output|) - see the table below for specific time and space complexities.
Query | Preprocessing time | Space | Query time
Exists(i,j) | O(n+d) | O(n) | O(1)
Report(i,j) | O(n+d) | O(n+d) | O(1+|output|)
ReportDistinct(i,j) | O(n log n+d) | O(n+d) | O(log n+|output|)
Count(i,j) | O({n log n}/{log log n} + d log^{3/2} n) | O(n+d log n) | O({log^2n}/{log log n})
The case of counting patterns is much more involved and needs a combination of a locally consistent parsing with orthogonal range searching. Reporting distinct patterns, on the other hand, uses the structure of maximal repetitions in strings. Finally, we provide tight - up to subpolynomial factors - upper and lower bounds for the case of a dynamic dictionary.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.22/LIPIcs.ISAAC.2019.22.pdf
string algorithms
dictionary matching
internal pattern matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
23:1
23:16
10.4230/LIPIcs.ISAAC.2019.23
article
Approximating the Geometric Edit Distance
Fox, Kyle
1
Li, Xinyi
1
The University of Texas at Dallas, USA
Edit distance is a measure of similarity between two sequences such as strings, point sequences, or polygonal curves. Many matching problems from a variety of areas, such as signal analysis and bioinformatics, need to be solved in a geometric space. Therefore, the geometric edit distance (GED) has been studied. In this paper, we describe the first near-linear time algorithm achieving a strictly sublinear approximation factor for computing the GED of two point sequences in constant-dimensional Euclidean space. Specifically, we present a randomized O(n log^2 n) time O(sqrt n)-approximation algorithm. We then generalize our result to give a randomized alpha-approximation algorithm for any alpha in [1, sqrt n], running in time O~(n^2/alpha^2). Both algorithms are Monte Carlo and return approximately optimal solutions with high probability.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.23/LIPIcs.ISAAC.2019.23.pdf
Geometric edit distance
Approximation
Randomized algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
24:1
24:19
10.4230/LIPIcs.ISAAC.2019.24
article
On Adaptivity Gaps of Influence Maximization Under the Independent Cascade Model with Full-Adoption Feedback
Chen, Wei
1
Peng, Binghui
2
Microsoft Research, Beijing, China
Columbia University, New York, United States
In this paper, we study the adaptivity gap of the influence maximization problem under the independent cascade model when full-adoption feedback is available. Our main results derive upper bounds for several families of well-studied influence graphs, including in-arborescences, out-arborescences, and bipartite graphs. In particular, we prove that the adaptivity gap for in-arborescences lies in [e/(e-1), 2e/(e-1)], and for out-arborescences in [e/(e-1), 2]. These are the first constant upper bounds in the full-adoption feedback model. Our analysis provides several novel ideas for tackling the correlated feedback that arises in adaptive stochastic optimization, which may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.24/LIPIcs.ISAAC.2019.24.pdf
Adaptive influence maximization
adaptivity gap
full-adoption feedback
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
25:1
25:14
10.4230/LIPIcs.ISAAC.2019.25
article
Minimum-Width Double-Strip and Parallelogram Annulus
Bae, Sang Won
1
https://orcid.org/0000-0002-8802-4247
Division of Computer Science and Engineering, Kyonggi University, Suwon, Korea
In this paper, we study the problem of computing a minimum-width double-strip or parallelogram annulus that encloses a given set of n points in the plane. A double-strip is a closed region in the plane whose boundary consists of four parallel lines, and a parallelogram annulus is a closed region between two edge-parallel parallelograms. We present the first algorithms for these problems. Among them are O(n^2)-time and O(n^3 log n)-time algorithms that compute a minimum-width double-strip and parallelogram annulus, respectively, when their orientations can be freely chosen.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.25/LIPIcs.ISAAC.2019.25.pdf
geometric covering
parallelogram annulus
two-line center
double-strip
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
26:1
26:17
10.4230/LIPIcs.ISAAC.2019.26
article
Small Candidate Set for Translational Pattern Search
Huang, Ziyun
1
Feng, Qilong
2
Wang, Jianxin
2
Xu, Jinhui
3
Department of Computer Science and Software Engineering, Penn State Erie, The Behrend College, Erie, PA, USA
School of Computer Science and Engineering, Central South University, P.R. China
Department of Computer Science and Engineering, State University of New York at Buffalo, USA
In this paper, we study the following pattern search problem: Given a pair of point sets A and B in fixed dimensional space R^d, with |B| = n, |A| = m and n >= m, the pattern search problem is to find the translations T of A such that each identified translation induces a matching between T(A) and a subset B' of B with cost no more than some given threshold, where the cost is defined as the minimum bipartite matching cost of T(A) and B'. We present a novel algorithm to produce a small set of candidate translations for the pattern search problem. For any B' subseteq B with |B'| = |A|, there exists at least one translation T in the candidate set such that the minimum bipartite matching cost between T(A) and B' is no larger than (1+epsilon) times the minimum bipartite matching cost between A and B' under any translation (i.e., the optimal translational matching cost). We also show that there exists an alternative solution to this problem, which constructs a candidate set of size O(n log^2 n) in O(n log^2 n) time with high probability of success. As a by-product of our construction, we obtain a weak epsilon-net for hypercube ranges, which significantly improves the construction time and the size of the candidate set. Our technique can be applied to a number of applications, including the translational pattern matching problem.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.26/LIPIcs.ISAAC.2019.26.pdf
Bipartite matching
Alignment
Discretization
Approximate algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
27:1
27:11
10.4230/LIPIcs.ISAAC.2019.27
article
The Weighted k-Center Problem in Trees for Fixed k
Bhattacharya, Binay
1
Das, Sandip
2
Dev, Subhadeep Ranjan
2
Simon Fraser University, Burnaby, Canada
Indian Statistical Institute, Kolkata, India
We present a linear time algorithm for the weighted k-center problem on trees for fixed k. This partially settles the long-standing question about the lower bound on the time complexity of the problem. The current time complexity of the best-known algorithm for the problem with k as part of the input is O(n log n) by Wang et al. [Haitao Wang and Jingru Zhang, 2018]. Whether an O(n) time algorithm exists for arbitrary k is still open.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.27/LIPIcs.ISAAC.2019.27.pdf
facility location
prune and search
parametric search
k-center problem
conditional k-center problem
trees
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
28:1
28:14
10.4230/LIPIcs.ISAAC.2019.28
article
Online Knapsack Problems with a Resource Buffer
Han, Xin
1
Kawase, Yasushi
2
Makino, Kazuhisa
3
Yokomaku, Haruki
4
Dalian University of Technology, Dalian, China
Tokyo Institute of Technology, Tokyo, Japan
Kyoto University, Kyoto, Japan
NTT DATA Mathematical Systems, Tokyo, Japan
In this paper, we introduce online knapsack problems with a resource buffer. In these problems, we are given a knapsack with capacity 1, a buffer with capacity R >= 1, and items that arrive one by one. Each item, on arrival, must irrevocably either be placed in the buffer or discarded. After all items have arrived, we transfer a subset of the items in the buffer into the knapsack. Our goal is to maximize the total value of the items in the knapsack. We consider four variants depending on whether items in the buffer are removable (i.e., we may later remove items from the buffer) or non-removable, and whether item values are proportional (i.e., the value of each item is proportional to its size) or general. For the general & non-removable case, we observe that no constant-competitive algorithm exists for any R >= 1. For the proportional & non-removable case, we show that a simple greedy algorithm is optimal for every R >= 1. For the general & removable and the proportional & removable cases, we present optimal algorithms for small R and asymptotically nearly optimal algorithms for general R.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.28/LIPIcs.ISAAC.2019.28.pdf
Online knapsack problem
Resource augmentation
Competitive analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
29:1
29:22
10.4230/LIPIcs.ISAAC.2019.29
article
Local Cliques in ER-Perturbed Random Geometric Graphs
Kahle, Matthew
1
Tian, Minghao
2
Wang, Yusu
2
Department of Mathematics, The Ohio State University, USA
Computer Science and Engineering Dept., The Ohio State University, USA
We study a random graph model introduced in [Srinivasan Parthasarathy et al., 2017] where one adds Erdős–Rényi (ER) type perturbation to a random geometric graph. More precisely, assume G_X^* is a random geometric graph sampled from a nice measure on a metric space X = (X,d). An ER-perturbed random geometric graph G^(p,q) is generated by removing each existing edge from G_X^* with probability p, while inserting each non-existent edge to G_X^* with probability q. We consider a localized version of clique number for G^(p,q): Specifically, we study the edge clique number for each edge in a graph, defined as the size of the largest clique(s) in the graph containing that edge. We show that the edge clique number presents two fundamentally different types of behaviors in G^(p,q), depending on which "type" of randomness it is generated from.
As an application of the above results, we show that by a simple filtering process based on the edge clique number, we can recover the shortest-path metric of the random geometric graph G_X^* within a multiplicative factor of 3 from an ER-perturbed observed graph G^(p,q), for a significantly wider range of insertion probability q than what is required in [Srinivasan Parthasarathy et al., 2017].
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.29/LIPIcs.ISAAC.2019.29.pdf
random graphs
random geometric graphs
edge clique number
the probabilistic method
metric recovery
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
30:1
30:13
10.4230/LIPIcs.ISAAC.2019.30
article
Local Routing in Sparse and Lightweight Geometric Graphs
Ashvinkumar, Vikrant
1
Gudmundsson, Joachim
1
Levcopoulos, Christos
2
Nilsson, Bengt J.
3
van Renssen, André
1
University of Sydney, Australia
Lund University, Sweden
Malmö University, Sweden
Online routing in a planar embedded graph is central to a number of fields and has been studied extensively in the literature. For most planar graphs no O(1)-competitive online routing algorithm exists. A notable exception is the Delaunay triangulation for which Bose and Morin [Bose and Morin, 2004] showed that there exists an online routing algorithm that is O(1)-competitive. However, a Delaunay triangulation can have Omega(n) vertex degree and a total weight that is a linear factor greater than the weight of a minimum spanning tree.
We show a simple construction, given a set V of n points in the Euclidean plane, of a planar geometric graph on V that has small weight (within a constant factor of the weight of a minimum spanning tree on V), constant degree, and that admits a local routing strategy that is O(1)-competitive. Moreover, the technique used to bound the weight works generally for any planar geometric graph whilst preserving the admission of an O(1)-competitive routing strategy.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.30/LIPIcs.ISAAC.2019.30.pdf
Computational geometry
Spanners
Routing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
31:1
31:12
10.4230/LIPIcs.ISAAC.2019.31
article
Searching for Cryptogenography Upper Bounds via Sum of Square Programming
Scheder, Dominik
1
Tang, Shuyang
1
Zhang, Jiaheng
2
Shanghai Jiao Tong University, China
UC Berkeley, USA
Cryptogenography is a secret-leaking game in which one of n players is holding a secret to be leaked. The n players engage in communication in order to (1) reveal the secret while (2) keeping the identity of the secret holder as obscure as possible. All communication is public, and no computational hardness assumptions are made, i.e., the setting is purely information theoretic. Brody, Jakobsen, Scheder, and Winkler [Joshua Brody et al., 2014] formally defined this problem, showed that it has an equivalent geometric characterization, and gave upper and lower bounds for the case in which the n players want to leak a single bit. Surprisingly, even the easiest case, where two players want to leak a secret consisting of a single bit, is not completely understood. Doerr and Künnemann [Benjamin Doerr and Marvin Künnemann, 2016] showed how to automatically search for good protocols using a computer, thus finding an improved protocol for the 1-bit two-player case. In this work, we show how the search for upper bounds (impossibility results) can be formulated as a Sum of Squares program. We implement this idea for the 1-bit two-player case and significantly improve the previous upper bound from 47/128 = 0.3671875 to 0.35183.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.31/LIPIcs.ISAAC.2019.31.pdf
Communication Complexity
Secret Leaking
Sum of Squares Programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
32:1
32:12
10.4230/LIPIcs.ISAAC.2019.32
article
On the Complexity of Lattice Puzzles
Kobayashi, Yasuaki
1
Suetsugu, Koki
2
https://orcid.org/0000-0003-2529-7501
Tsuiki, Hideki
3
Uehara, Ryuhei
4
https://orcid.org/0000-0003-0895-3765
Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 Japan
National Institute of Informatics, Hitotsubashi, Chiyoda-ku, Tokyo 101-8430 Japan
Graduate School of Human and Environmental Studies, Kyoto University, Yoshida Nihonmatsu-cho, Sakyo-ku, Kyoto 606-8501 Japan
School of Information Science, Japan Advanced Institute of Science and Technology, Asahidai, Nomi, Ishikawa 923-1292, Japan
In this paper, we investigate the computational complexity of the lattice puzzle, one of the traditional puzzles. A lattice puzzle consists of 2n plates with some slits, and the goal is to assemble them into a lattice of size n x n. The puzzle has a long history in the puzzle community; however, it has not previously been studied from the viewpoint of theoretical computer science. The puzzle has some natural variants, and they characterize representative computational complexity classes within NP. In particular, one of the natural variants gives a characterization of the graph isomorphism problem; that is, this variant is GI-complete in general. As far as the authors know, this is the first non-trivial GI-complete problem characterized by a classic puzzle. Like the sliding block puzzles, this simple puzzle can be used to characterize several representative computational complexity classes, giving new insight into them.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.32/LIPIcs.ISAAC.2019.32.pdf
Lattice puzzle
NP-completeness
GI-completeness
FPT algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
33:1
33:16
10.4230/LIPIcs.ISAAC.2019.33
article
The I/O Complexity of Hybrid Algorithms for Square Matrix Multiplication
De Stefani, Lorenzo
1
Department of Computer Science, Brown University, United States of America
Asymptotically tight lower bounds are derived for the I/O complexity of a general class of hybrid algorithms computing the product of n x n square matrices, combining a "Strassen-like" fast matrix multiplication approach of computational complexity Theta(n^{log_2 7}) with "standard" matrix multiplication algorithms of computational complexity Omega(n^3). We present a novel and tight Omega((n/max{sqrt M, n_0})^{log_2 7}(max{1,(n_0)/M})^3 M) lower bound for the I/O complexity of a class of "uniform, non-stationary" hybrid algorithms when executed in a two-level storage hierarchy with M words of fast memory, where n_0 denotes the threshold size of sub-problems which are computed using standard algorithms of algebraic complexity Omega(n^3).
The lower bound is actually derived for the more general class of "non-uniform, non-stationary" hybrid algorithms which allow recursive calls to have a different structure, even when they refer to the multiplication of matrices of the same size and in the same recursive level, although the quantitative expressions become more involved. Our results are the first I/O lower bounds for these classes of hybrid algorithms. All presented lower bounds apply even if the recomputation of partial results is allowed and are asymptotically tight.
The proof technique combines the analysis of the Grigoriev’s flow of the matrix multiplication function, combinatorial properties of the encoding functions used by fast Strassen-like algorithms, and an application of the Loomis-Whitney geometric theorem for the analysis of standard matrix multiplication algorithms. Extensions of the lower bounds for a parallel model with P processors are also discussed.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.33/LIPIcs.ISAAC.2019.33.pdf
I/O complexity
Hybrid Algorithm
Matrix Multiplication
Recomputation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
34:1
34:16
10.4230/LIPIcs.ISAAC.2019.34
article
Accurate MapReduce Algorithms for k-Median and k-Means in General Metric Spaces
Mazzetto, Alessio
1
Pietracaprina, Andrea
2
Pucci, Geppino
2
Department of Computer Science, Brown University, Providence, USA
Department of Information Engineering, University of Padova, Padova, Italy
Center-based clustering is a fundamental primitive for data analysis and becomes very challenging for large datasets. In this paper, we focus on the popular k-median and k-means variants which, given a set P of points from a metric space and a parameter k<|P|, require identifying a set S of k centers minimizing, respectively, the sum of the distances and of the squared distances of all points in P from their closest centers. Our specific focus is on general metric spaces, for which it is reasonable to require that the centers belong to the input set (i.e., S subseteq P). We present coreset-based 3-round distributed approximation algorithms for the above problems using the MapReduce computational model. The algorithms are rather simple and obliviously adapt to the intrinsic complexity of the dataset, captured by the doubling dimension D of the metric space. Remarkably, the algorithms attain approximation ratios that can be made arbitrarily close to those achievable by the best known polynomial-time sequential approximations, and they are very space efficient for small D, requiring local memory sizes substantially sublinear in the input size. To the best of our knowledge, no previous distributed approaches were able to attain similar quality-performance guarantees in general metric spaces.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.34/LIPIcs.ISAAC.2019.34.pdf
Clustering
k-median
k-means
MapReduce
Coreset
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
35:1
35:16
10.4230/LIPIcs.ISAAC.2019.35
article
On Optimal Balance in B-Trees: What Does It Cost to Stay in Perfect Shape?
Fagerberg, Rolf
1
Hammer, David
2
1
Meyer, Ulrich
2
University of Southern Denmark, Odense, Denmark
Goethe University Frankfurt, Germany
Any B-tree has height at least ceil[log_B(n)]. Static B-trees achieving this height are easy to build. In the dynamic case, however, standard B-tree rebalancing algorithms only maintain a height within a constant factor of this optimum. We investigate exactly how close to ceil[log_B(n)] the height of dynamic B-trees can be maintained as a function of the rebalancing cost. In this paper, we prove a lower bound on the cost of maintaining optimal height ceil[log_B(n)], which shows that this cost must increase from Omega(1/B) to Omega(n/B) rebalancing per update as n grows from one power of B to the next. We also provide an almost matching upper bound, demonstrating this lower bound to be essentially tight. We then give a variant upper bound which can maintain near-optimal height at low cost. As two special cases, we can maintain optimal height for all but a vanishing fraction of values of n using Theta(log_B(n)) amortized rebalancing cost per update and we can maintain a height of optimal plus one using O(1/B) amortized rebalancing cost per update. More generally, for any rebalancing budget, we can maintain (as n grows from one power of B to the next) optimal height essentially up to the point where the lower bound requires the budget to be exceeded, after which optimal height plus one is maintained. Finally, we prove that this balancing scheme gives B-trees with very good storage utilization.
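The optimal height bound ceil(log_B(n)) quoted in the abstract, and the jump by one as n crosses a power of B, can be verified with a small integer computation; the function name and the fanout B = 16 below are illustrative.

```python
def min_btree_height(n, B):
    """Minimum possible height of a B-tree with fanout B on n keys,
    i.e. ceil(log_B(n)), computed with integers to avoid floating-point
    rounding: each extra level multiplies the capacity by at most B."""
    h, cap = 0, 1
    while cap < n:
        cap *= B
        h += 1
    return h

# As n grows from one power of B to the next, the optimal height jumps by one.
B = 16
print([min_btree_height(n, B) for n in (B, B**2, B**2 + 1, B**3)])  # [1, 2, 3, 3]
```

It is exactly at these jump points (n passing a power of B) that the abstract's lower bound forces Omega(n/B) rebalancing work per update.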
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.35/LIPIcs.ISAAC.2019.35.pdf
B-trees
Data structures
Lower bounds
Complexity
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
36:1
36:18
10.4230/LIPIcs.ISAAC.2019.36
article
How Does Object Fatness Impact the Complexity of Packing in d Dimensions?
Kisfaludi-Bak, Sándor
1
Marx, Dániel
1
van der Zanden, Tom C.
2
Max Planck Institut für Informatik, Saarbrücken, Germany
Department of Data Analytics and Digitalisation, Maastricht University, The Netherlands
Packing is a classical problem where one is given a set of subsets of Euclidean space called objects, and the goal is to find a maximum size subset of objects that are pairwise non-intersecting. The problem is also known as the Independent Set problem on the intersection graph defined by the objects. Although the problem is NP-complete, there are several subexponential algorithms in the literature. One of the key assumptions of such algorithms has been that the objects are fat, with a few exceptions in two dimensions; for example, the packing problem of a set of polygons in the plane surprisingly admits a subexponential algorithm. In this paper we give tight running time bounds for packing similarly-sized non-fat objects in higher dimensions.
We propose an alternative and very weak measure of fatness called the stabbing number, and show that the packing problem in Euclidean space of constant dimension d >= 3 for a family of similarly sized objects with stabbing number alpha can be solved in 2^O(n^(1-1/d) alpha) time. We prove that even in the case of axis-parallel boxes of fixed shape, there is no 2^o(n^(1-1/d) alpha) algorithm under ETH. This result smoothly bridges the two extremes: constant-fat objects (alpha=1), where a subexponential algorithm of the usual running time exists, and very "skinny" objects (alpha=n^(1/d)), where one cannot hope to improve upon the brute-force running time of 2^O(n). It thereby characterizes the impact of fatness on the complexity of packing similarly sized objects. We also study the same problem when parameterized by the solution size k, and give an n^O(k^(1-1/d) alpha) algorithm, with an almost matching lower bound: there is no algorithm with running time of the form f(k) n^o(k^(1-1/d) alpha/log k) under ETH. One of our main tools in these reductions is a new wiring theorem that may be of independent interest.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.36/LIPIcs.ISAAC.2019.36.pdf
Geometric intersection graph
Independent Set
Object fatness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
37:1
37:17
10.4230/LIPIcs.ISAAC.2019.37
article
On One-Round Discrete Voronoi Games
de Berg, Mark
1
Kisfaludi-Bak, Sándor
2
Mehr, Mehran
1
Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands
Max Planck Institut für Informatik, Saarbrücken, Germany
Let V be a multiset of n points in R^d, which we call voters, and let k >= 1 and l >= 1 be two given constants. We consider the following game, where two players P and Q compete over the voters in V: First, player P selects a set P of k points in R^d, and then player Q selects a set Q of l points in R^d. Player P wins a voter v in V iff dist(v,P) <= dist(v,Q), where dist(v,P) := min_{p in P} dist(v,p) and dist(v,Q) is defined similarly. Player P wins the game if he wins at least half the voters. The algorithmic problem we study is the following: given V, k, and l, how efficiently can we decide if player P has a winning strategy, that is, if P can select his k points such that he wins the game no matter where Q places her points.
Banik et al. devised a singly-exponential algorithm for the game in R^1, for the case k=l. We improve their result by presenting the first polynomial-time algorithm for the game in R^1. Our algorithm can handle arbitrary values of k and l. We also show that if d >= 2, deciding if player P has a winning strategy is Sigma_2^P-hard when k and l are part of the input. Finally, we prove that for any dimension d, the problem is contained in the complexity class ∃∀ℝ, and we give an algorithm that works in polynomial time for fixed k and l.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.37/LIPIcs.ISAAC.2019.37.pdf
competitive facility location
plurality point
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
38:1
38:13
10.4230/LIPIcs.ISAAC.2019.38
article
On Explicit Branching Programs for the Rectangular Determinant and Permanent Polynomials
Arvind, V.
1
Chatterjee, Abhranil
1
Datta, Rajit
2
Mukhopadhyay, Partha
2
Institute of Mathematical Sciences (HBNI), Chennai, India
Chennai Mathematical Institute, Chennai, India
We study the arithmetic circuit complexity of some well-known family of polynomials through the lens of parameterized complexity. Our main focus is on the construction of explicit algebraic branching programs (ABP) for determinant and permanent polynomials of the rectangular symbolic matrix in both commutative and noncommutative settings. The main results are:
- We show an explicit O^*(binom{n}{downarrow k/2})-size ABP construction for the noncommutative permanent polynomial of a k x n symbolic matrix. We obtain this via an explicit ABP construction of size O^*(binom{n}{downarrow k/2}) for S_{n,k}^*, the noncommutative symmetrized version of the elementary symmetric polynomial S_{n,k}.
- We obtain an explicit O^*(2^k)-size ABP construction for the commutative rectangular determinant polynomial of the k x n symbolic matrix.
- In contrast, we show that evaluating the rectangular noncommutative determinant over rational matrices is #W[1]-hard.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.38/LIPIcs.ISAAC.2019.38.pdf
Determinant
Permanent
Parameterized Complexity
Branching Programs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
39:1
39:12
10.4230/LIPIcs.ISAAC.2019.39
article
A Competitive Algorithm for Random-Order Stochastic Virtual Circuit Routing
Nguyễn Kim, Thắng
1
https://orcid.org/0000-0002-6085-9453
IBISC, Univ Evry, University Paris Saclay, Evry, France
We consider the virtual circuit routing problem in the stochastic model with uniformly random arrival requests. In the problem, a graph is given and requests arrive in a uniform random order. Each request is specified by its connectivity demand and the load of a request on an edge is a random variable with known distribution. The objective is to satisfy the connectivity request demands while maintaining the expected congestion (the maximum edge load) of the underlying network as small as possible.
Despite a large literature on congestion minimization in the deterministic model, not much is known in the stochastic model, even in the offline setting. In this paper, we present an O(log n/log log n)-competitive algorithm when the optimal routing is sufficiently congested. This ratio matches the lower bound of Omega(log n/log log n) (under a reasonable complexity assumption) in the offline setting. Additionally, we show that, restricted to the offline setting with deterministic loads, our algorithm yields a tight approximation ratio of Theta(log n/log log n). The algorithm is essentially greedy (without solving an LP or rounding), and its simplicity makes it practically appealing.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.39/LIPIcs.ISAAC.2019.39.pdf
Approximation Algorithms
Congestion Minimization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
40:1
40:12
10.4230/LIPIcs.ISAAC.2019.40
article
An Improved Data Structure for Left-Right Maximal Generic Words Problem
Fujishige, Yuta
1
Nakashima, Yuto
1
Inenaga, Shunsuke
1
Bannai, Hideo
1
https://orcid.org/0000-0002-6856-5185
Takeda, Masayuki
1
Department of Informatics, Kyushu University, Japan
For a set D of documents and a positive integer d, a string w is said to be d-left-right maximal, if (1) w occurs in at least d documents in D, and (2) any proper superstring of w occurs in less than d documents. The left-right-maximal generic words problem is, given a set D of documents, to preprocess D so that for any string p and for any positive integer d, all the superstrings of p that are d-left-right maximal can be answered quickly. In this paper, we present an O(n log m) space data structure (in words) which answers queries in O(|p| + o log log m) time, where n is the total length of documents in D, m is the number of documents in D and o is the number of outputs. Our solution improves the previous one by Nishimoto et al. (PSC 2015), which uses an O(n log n) space data structure answering queries in O(|p|+ r * log n + o * log^2 n) time, where r is the number of right-extensions q of p occurring in at least d documents such that any proper right extension of q occurs in less than d documents.
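The definition of d-left-right maximality can be tested by brute force on tiny inputs (illustrative only; the paper's data structure answers such queries far faster). It suffices to test single-character extensions, since any proper superstring of w contains some aw or wb as a substring, so its document count cannot exceed theirs:

```python
def doc_count(w, docs):
    """Number of documents containing w as a substring."""
    return sum(1 for doc in docs if w in doc)

def is_d_left_right_maximal(w, docs, d):
    """Check conditions (1) and (2) of the definition, for d >= 1.

    Extensions by characters outside the documents' alphabet occur in 0
    documents, so only in-alphabet extensions need to be checked.
    """
    if doc_count(w, docs) < d:          # (1): w occurs in at least d documents
        return False
    alphabet = set("".join(docs))
    exts = [c + w for c in alphabet] + [w + c for c in alphabet]
    # (2): every proper superstring occurs in fewer than d documents
    return all(doc_count(e, docs) < d for e in exts)
```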
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.40/LIPIcs.ISAAC.2019.40.pdf
generic words
suffix trees
string processing algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
41:1
41:14
10.4230/LIPIcs.ISAAC.2019.41
article
Parameterized Complexity Classification of Deletion to List Matrix-Partition for Low-Order Matrices
Agrawal, Akanksha
1
Kolay, Sudeshna
1
Madathil, Jayakrishnan
2
Saurabh, Saket
3
2
Ben-Gurion University of the Negev, Beer-Sheva, Israel
The Institute of Mathematical Sciences, HBNI, Chennai, India
University of Bergen, Bergen, Norway
Given a symmetric l x l matrix M=(m_{i,j}) with entries in {0,1,*}, a graph G and a function L : V(G) - > 2^{[l]} (where [l] = {1,2,...,l}), a list M-partition of G with respect to L is a partition of V(G) into l parts, say, V_1, V_2, ..., V_l such that for each i,j in {1,2,...,l}, (i) if m_{i,j}=0 then for any u in V_i and v in V_j, uv not in E(G), (ii) if m_{i,j}=1 then for any (distinct) u in V_i and v in V_j, uv in E(G), (iii) for each v in V(G), if v in V_i then i in L(v). We consider the Deletion to List M-Partition problem that takes as input a graph G, a list function L:V(G) - > 2^[l] and a positive integer k. The aim is to determine whether there is a k-sized set S subseteq V(G) such that G-S has a list M-partition. Many important problems like Vertex Cover, Odd Cycle Transversal, Split Vertex Deletion, Multiway Cut and Deletion to List Homomorphism are special cases of the Deletion to List M-Partition problem. In this paper, we provide a classification of the parameterized complexity of Deletion to List M-Partition, parameterized by k, (a) when M is of order at most 3, and (b) when M is of order 4 with all diagonal entries belonging to {0,1}.
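The three conditions of a list M-partition can be verified directly; the following is a definition checker only (not the deletion algorithm studied in the paper), with 0-based part indices as an illustrative convention:

```python
def is_list_M_partition(G, M, L, parts):
    """Verify that `parts` (a map vertex -> part index) is a list M-partition
    of graph G with respect to the lists L.

    G: dict vertex -> set of neighbours (undirected).
    M: l x l symmetric matrix with entries in {'0', '1', '*'}.
    L: dict vertex -> set of allowed part indices.
    """
    for v, i in parts.items():
        if i not in L[v]:                  # condition (iii): respect the lists
            return False
    verts = list(parts)
    for a in range(len(verts)):
        for b in range(a + 1, len(verts)):
            u, v = verts[a], verts[b]
            m = M[parts[u]][parts[v]]
            edge = v in G[u]
            if m == '0' and edge:          # condition (i): edge forbidden
                return False
            if m == '1' and not edge:      # condition (ii): edge required
                return False
    return True
```

For instance, M = [['1','*'],['*','0']] asks for a split partition: part 0 a clique, part 1 an independent set.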
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.41/LIPIcs.ISAAC.2019.41.pdf
list matrix partitions
parameterized classification
Almost 2-SAT
important separators
iterative compression
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
42:1
42:15
10.4230/LIPIcs.ISAAC.2019.42
article
The Generalized Microscopic Image Reconstruction Problem
Bar-Noy, Amotz
1
Böhnlein, Toni
2
Lotker, Zvi
3
2
Peleg, David
4
Rawitz, Dror
2
City University of New York (CUNY), USA
Bar Ilan University, Ramat-Gan, Israel
Ben Gurion University of the Negev, Beer Sheva, Israel
Weizmann Institute of Science, Rehovot, Israel
This paper presents and studies a generalization of the microscopic image reconstruction problem (MIR) introduced by Frosini and Nivat [Andrea Frosini and Maurice Nivat, 2007; Nivat, 2002]. Consider a specimen for inspection, represented as a collection of points typically organized on a grid in the plane. Assume each point x has an associated physical value l_x, which we would like to determine. However, it might be that obtaining these values precisely (by a surgical probe) is difficult, risky, or impossible. The alternative is to employ aggregate measuring techniques (such as EM, CT, US or MRI), whereby each measurement is taken over a larger window, and the exact values at each point are subsequently extracted by computational methods.
In this paper we extend the MIR framework in a number of ways. First, we consider a generalized setting where the inspected object is represented by an arbitrary graph G, and the vector l in R^n assigns a value l_v to each node v. A probe centered at a vertex v will capture a window encompassing its entire neighborhood N[v], i.e., the outcome of a probe centered at v is P_v = sum_{w in N[v]} l_w. We give a criterion for the graphs for which the extended MIR problem can be solved by extracting the vector l from the collection of probes, P^- = {P_v | v in V}.
We then consider cases where such reconstruction is impossible (namely, graphs G for which the probe vector P is inconclusive, in the sense that there may be more than one vector l yielding P). Let us assume that surgical probes (whose outcome at vertex v is the exact value of l_v) are technically available to us (yet are expensive or risky, and must be used sparingly). We show that in such cases, it may still be possible to achieve reconstruction based on a combination of a collection of standard probes together with a suitable set of surgical probes. We aim at identifying the minimum number of surgical probes necessary for a unique reconstruction, depending on the graph topology. This is referred to as the Minimum Surgical Probing problem (MSP).
Besides providing a solution for the above problems for arbitrary graphs, we also explore the range of possible behaviors of the Minimum Surgical Probing problem by determining the number of surgical probes necessary in certain specific graph families, such as perfect k-ary trees, paths, cycles, grids, tori and tubes.
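In matrix form the probes read P = (A + I) l, where A is the adjacency matrix of G, so reconstruction from standard probes alone is possible exactly when A + I is nonsingular. A minimal sketch of this linear-algebra view (illustrative, not the paper's treatment of specific graph families):

```python
from fractions import Fraction

def reconstruct(adj, probes):
    """Solve (A + I) l = P by Gaussian elimination over the rationals.

    Returns the value vector l, or None when A + I is singular (the probe
    vector is then inconclusive and surgical probes become necessary).
    adj: list of neighbour lists (0-based); probes: list of probe values P_v.
    """
    n = len(adj)
    # Build the augmented matrix [A + I | P].
    M = [[Fraction(0)] * n + [Fraction(probes[v])] for v in range(n)]
    for v in range(n):
        M[v][v] = Fraction(1)
        for w in adj[v]:
            M[v][w] = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None                       # singular: no unique solution
        M[col], M[piv] = M[piv], M[col]
        inv = M[col][col]
        M[col] = [x / inv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[v][n] for v in range(n)]
```

On a single edge (two adjacent vertices), A + I is singular: both endpoints yield the same probe value, which is exactly the inconclusive case.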
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.42/LIPIcs.ISAAC.2019.42.pdf
Discrete mathematics
Combinatorics
Reconstruction algorithm
Image reconstruction
Graph spectra
Grid graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
43:1
43:19
10.4230/LIPIcs.ISAAC.2019.43
article
Stabilization Time in Minority Processes
Papp, Pál András
1
Wattenhofer, Roger
1
ETH Zürich, Switzerland
We analyze the stabilization time of minority processes in graphs. A minority process is a dynamically changing coloring, where each node repeatedly changes its color to the color which is least frequent in its neighborhood. First, we present a simple Omega(n^2) stabilization time lower bound in the sequential adversarial model. Our main contribution is a graph construction which proves an Omega(n^(2-epsilon)) stabilization time lower bound for any epsilon>0. This lower bound holds even if the order of nodes is chosen benevolently, not only in the sequential model, but also in any reasonable concurrent model of the process.
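The sequential dynamics can be sketched as a toy simulator; the palette and deterministic tie-breaking below are illustrative assumptions, not part of the paper's lower-bound construction:

```python
from collections import Counter

def minority_step(adj, colors, order, palette):
    """One sequential pass of a minority process.

    Each node in `order`, in turn, switches to a colour that is least
    frequent in its neighbourhood, if that strictly improves on its current
    colour. Mutates `colors` and returns the number of colour changes.
    adj: dict node -> list of neighbours; palette: the available colours.
    """
    changes = 0
    for v in order:
        freq = Counter(colors[u] for u in adj[v])   # absent colours count 0
        best = min(palette, key=lambda c: (freq[c], c))  # ties by colour name
        if freq[best] < freq[colors[v]]:
            colors[v] = best
            changes += 1
    return changes
```

Iterating such passes until no node changes yields the stabilization time that the paper bounds from below.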
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.43/LIPIcs.ISAAC.2019.43.pdf
Minority process
Benevolent model
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
44:1
44:14
10.4230/LIPIcs.ISAAC.2019.44
article
Parameterized Complexity of Stable Roommates with Ties and Incomplete Lists Through the Lens of Graph Parameters
Bredereck, Robert
1
https://orcid.org/0000-0002-6303-6276
Heeger, Klaus
1
https://orcid.org/0000-0001-8779-0890
Knop, Dušan
1
2
https://orcid.org/0000-0003-2588-5709
Niedermeier, Rolf
1
https://orcid.org/0000-0003-1703-1236
Technische Universität Berlin, Chair of Algorithmics and Computational Complexity, Germany
Department of Theoretical Computer Science, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic
We continue and extend previous work on the parameterized complexity analysis of the NP-hard Stable Roommates with Ties and Incomplete Lists problem, thereby strengthening earlier results both on the side of parameterized hardness and on the side of fixed-parameter tractability. Unlike its famous sister problem Stable Marriage, which focuses on a bipartite scenario, Stable Roommates with Incomplete Lists allows for arbitrary acceptability graphs whose edges specify the possible matchings of any two agents (agents are represented by graph vertices). Herein, incomplete lists and ties reflect the fact that in realistic application scenarios the agents cannot bring all other agents into a linear order. Among our main contributions is to show that it is W[1]-hard to compute a maximum-cardinality stable matching for acceptability graphs of bounded treedepth, bounded tree-cut width, and bounded feedback vertex number (each of these taken as the respective parameter). However, if we "only" ask for perfect stable matchings or the mere existence of a stable matching, then we obtain fixed-parameter tractability with respect to tree-cut width but not with respect to treedepth. On the positive side, we also provide fixed-parameter tractability results for the parameter feedback edge set number.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.44/LIPIcs.ISAAC.2019.44.pdf
Stable matching
acceptability graph
fixed-parameter tractability
W[1]-hardness
treewidth
treedepth
tree-cut width
feedback set numbers
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
45:1
45:17
10.4230/LIPIcs.ISAAC.2019.45
article
Path and Ancestor Queries over Trees with Multidimensional Weight Vectors
He, Meng
1
Kazi, Serikzhan
1
Faculty of Computer Science, Dalhousie University, Canada
We consider an ordinal tree T on n nodes, with each node assigned a d-dimensional weight vector w in {1,2,...,n}^d, where d in N is a constant. We study path queries as generalizations of well-known {orthogonal range queries}, with one of the dimensions being tree topology rather than a linear order. Since in our definitions d only represents the number of dimensions of the weight vector without taking the tree topology into account, a path query in a tree with d-dimensional weight vectors generalizes the corresponding (d+1)-dimensional orthogonal range query. We solve the {ancestor dominance reporting} problem as a direct generalization of the dominance reporting problem, in time O(lg^{d-1}{n}+k) and space of O(n lg^{d-2}n) words, where k is the size of the output, for d >= 2. We also achieve a tradeoff of O(n lg^{d-2+epsilon}{n}) words of space, with query time of O((lg^{d-1} n)/(lg lg n)^{d-2}+k), for the same problem, when d >= 3. We solve the {path successor} problem in O(n lg^{d-1}{n}) words of space and time O(lg^{d-1+epsilon}{n}) for d >= 1 and an arbitrary constant epsilon > 0. We propose a solution to the {path counting} problem, with O(n(lg{n}/lg lg{n})^{d-1}) words of space and O((lg{n}/lg lg{n})^{d}) query time, for d >= 1. Finally, we solve the {path reporting} problem in O(n lg^{d-1+epsilon}{n}) words of space and O((lg^{d-1}{n})/(lg lg{n})^{d-2}+k) query time, for d >= 2. These results match or nearly match the best tradeoffs of the respective range queries. We are also the first to solve the path successor problem, even for d = 1.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.45/LIPIcs.ISAAC.2019.45.pdf
path queries
range queries
algorithms
data structures
theory
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
46:1
46:20
10.4230/LIPIcs.ISAAC.2019.46
article
A 21/16-Approximation for the Minimum 3-Path Partition Problem
Chen, Yong
1
Goebel, Randy
2
Su, Bing
3
Tong, Weitian
4
https://orcid.org/0000-0002-9815-2330
Xu, Yao
5
Zhang, An
1
Department of Mathematics, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
Department of Computing Science, University of Alberta, Edmonton, Alberta T6G 2E8, Canada
School of Economics and Management, Xi'an Technological University, Xi'an, Shaanxi, China
Department of Computer Science, Eastern Michigan University, Ypsilanti, Michigan 48197, USA
Department of Computer Science, Kettering University, Flint, Michigan 48504, USA
The minimum k-path partition (Min-k-PP for short) problem asks to partition an input graph into the smallest number of paths, each of which has order at most k. We focus on the special case when k=3. Existing literature mainly concentrates on exact algorithms for special graphs, such as trees. Because of the challenge of NP-hardness on general graphs, the approximability of the Min-3-PP problem has attracted researchers' attention. The first approximation algorithm dates back about 10 years and achieves an approximation ratio of 3/2, which was recently improved to 13/9 and further to 4/3. We investigate the 3/2-approximation algorithm for the Min-3-PP problem and discover several interesting structural properties. Instead of studying the unweighted Min-3-PP problem directly, we design a novel weight schema for l-paths, l in {1, 2, 3}, and investigate the weighted version. A greedy local search algorithm is proposed to generate a heavy path partition. We show the achieved path partition has the fewest 1-paths, which is also the key ingredient of the algorithms with ratios 13/9 and 4/3. When switching back to the unweighted objective function, we prove the approximation ratio 21/16 via amortized analysis.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.46/LIPIcs.ISAAC.2019.46.pdf
3-path partition
exact set cover
approximation algorithm
local search
amortized analysis
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
47:1
47:15
10.4230/LIPIcs.ISAAC.2019.47
article
Efficiently Realizing Interval Sequences
Bar-Noy, Amotz
1
Choudhary, Keerti
2
Peleg, David
2
Rawitz, Dror
3
City University of New York (CUNY), USA
Weizmann Institute of Science, Rehovot, Israel
Bar Ilan University, Ramat-Gan, Israel
We consider the problem of realizable interval-sequences. An interval sequence consists of n integer intervals [a_i,b_i] such that 0 <= a_i <= b_i <= n-1, and is said to be graphic/realizable if there exists a graph with a degree sequence, say, D=(d_1,...,d_n), satisfying the condition a_i <= d_i <= b_i for each i in [1,n]. There is a characterisation (also implying an O(n) verifying algorithm) known for realizability of interval-sequences, which generalizes the Erdős-Gallai characterisation for graphic sequences. However, given any realizable interval-sequence, there is no known algorithm for computing a corresponding graphic certificate in o(n^2) time.
In this paper, we provide an O(n log n) time algorithm for computing a graphic sequence for any realizable interval sequence. In addition, when the interval sequence is non-realizable, we show how to find a graphic sequence having minimum deviation with respect to the given interval sequence, in the same time. Finally, we consider variants of the problem such as computing the most regular graphic sequence, and computing a minimum extension of a length p non-graphic sequence to a graphic one.
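For tiny instances, realizability can be checked directly from the definition via the Erdős-Gallai test. The exponential brute force below is only a reference point against which a fast realization algorithm could be sanity-checked, not the O(n log n) method of the paper:

```python
from itertools import product

def is_graphic(seq):
    """Erdős-Gallai test: a sequence is graphic iff its sum is even and,
    with d sorted non-increasingly, for every k:
        sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k).
    """
    d = sorted(seq, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

def realizable(intervals):
    """Search every degree choice a_i <= d_i <= b_i for a graphic one.
    Returns a witnessing degree sequence, or None. Exponential in n."""
    for d in product(*(range(a, b + 1) for a, b in intervals)):
        if is_graphic(d):
            return d
    return None
```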
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.47/LIPIcs.ISAAC.2019.47.pdf
Graph realization
graphic sequence
interval sequence
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
48:1
48:19
10.4230/LIPIcs.ISAAC.2019.48
article
Efficient Interactive Proofs for Linear Algebra
Cormode, Graham
1
https://orcid.org/0000-0002-0698-0922
Hickey, Chris
1
University of Warwick, UK
Motivated by the growth in outsourced data analysis, we describe methods for verifying basic linear algebra operations performed by a cloud service without having to recalculate the entire result. We provide novel protocols in the streaming setting for inner product, matrix multiplication and vector-matrix-vector multiplication where the number of rounds of interaction can be adjusted to tradeoff space, communication, and duration of the protocol. Previous work suggests that the costs of these interactive protocols are optimized by choosing O(log n) rounds. However, we argue that we can reduce the number of rounds without incurring a significant time penalty by considering the total end-to-end time, so fewer rounds and larger messages are preferable. We confirm this claim with an experimental study that shows that a constant number of rounds gives the fastest protocol.
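The simplest instance of verifying outsourced linear algebra without recomputing it is Freivalds' classic randomized check for matrix products. It is not the streaming interactive protocol of the paper, but it illustrates the cost asymmetry such protocols exploit: verification in O(n^2) time per round versus O(n^3) for naive recomputation.

```python
import random

def freivalds(A, B, C, rounds=20):
    """Probabilistically verify A @ B == C without computing A @ B:
    check A(Bx) == Cx for random 0/1 vectors x. Each round costs three
    matrix-vector products and errs with probability <= 1/2 on a wrong C,
    so `rounds` rounds err with probability <= 2^-rounds.
    """
    n = len(A)

    def matvec(M, x):
        return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        x = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, x)) != matvec(C, x):
            return False       # a witness of inequality was found
    return True                # consistent on all sampled vectors
```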
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.48/LIPIcs.ISAAC.2019.48.pdf
Streaming Interactive Proofs
Linear Algebra
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
49:1
49:22
10.4230/LIPIcs.ISAAC.2019.49
article
When Maximum Stable Set Can Be Solved in FPT Time
Bonnet, Édouard
1
https://orcid.org/0000-0002-1653-5822
Bousquet, Nicolas
2
Thomassé, Stéphan
1
3
Watrigant, Rémi
1
https://orcid.org/0000-0002-6243-5910
Univ Lyon, CNRS, ENS de Lyon, Université Claude Bernard Lyon 1, LIP UMR5668, France
CNRS, G-SCOP laboratory, Grenoble-INP, France
Institut Universitaire de France
Maximum Independent Set (MIS for short) is the paradigmatic W[1]-hard problem on general graphs. In stark contrast, polynomial-time algorithms are known when the inputs are restricted to structured graph classes such as, for instance, perfect graphs (which include bipartite graphs, chordal graphs, co-graphs, etc.) or claw-free graphs. In this paper, we introduce some variants of co-graphs with parameterized noise, that is, graphs that can be made into disjoint unions or complete sums by the removal of a certain number of vertices and the addition/deletion of a certain number of edges per incident vertex, both controlled by the parameter. We give a series of FPT Turing-reductions on these classes and use them to make some progress on the parameterized complexity of MIS in H-free graphs. We show that for every fixed t >= 1, MIS is FPT in P(1,t,t,t)-free graphs, where P(1,t,t,t) is the graph obtained by substituting all the vertices of a four-vertex path but one end of the path by cliques of size t. We also provide randomized FPT algorithms in dart-free graphs and in cricket-free graphs. This settles the FPT/W[1]-hard dichotomy for five-vertex graphs H.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.49/LIPIcs.ISAAC.2019.49.pdf
Parameterized Algorithms
Independent Set
H-Free Graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
50:1
50:15
10.4230/LIPIcs.ISAAC.2019.50
article
The k-Fréchet Distance: How to Walk Your Dog While Teleporting
Alves Akitaya, Hugo
1
Buchin, Maike
2
Ryvkin, Leonie
2
Urhausen, Jérôme
3
Department of Computer Science, Tufts University, Massachusetts, USA
Department of Mathematics, Ruhr University Bochum, Germany
Department of Information and Computing Sciences, Utrecht University, Netherlands
We introduce a new distance measure for comparing polygonal chains: the k-Fréchet distance. As the name implies, it is closely related to the well-studied Fréchet distance but detects similarities between curves that resemble each other only piecewise. The parameter k denotes the number of subcurves into which we divide the input curves (thus we allow up to k-1 "teleports" on each input curve). The k-Fréchet distance provides a nice transition between the (weak) Fréchet distance and the Hausdorff distance. However, we show that deciding this distance measure turns out to be NP-hard, which is interesting since both the (weak) Fréchet and the Hausdorff distance are computable in polynomial time. Nevertheless, we give several ways to deal with the hardness of the k-Fréchet distance: besides a short exponential-time algorithm for the general case, we give a polynomial-time algorithm for k=2, i.e., when we subdivide each input curve into two subcurves. We can also approximate the optimal k within a factor of 2. We then present a more intricate FPT algorithm using parameters k (the number of allowed subcurves) and z (the number of segments of one curve that intersect the epsilon-neighborhood of a point on the other curve).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.50/LIPIcs.ISAAC.2019.50.pdf
Measures
Fréchet distance
Hardness
FPT
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
51:1
51:21
10.4230/LIPIcs.ISAAC.2019.51
article
New Applications of Nearest-Neighbor Chains: Euclidean TSP and Motorcycle Graphs
Mamano, Nil
1
https://orcid.org/0000-0003-0414-2885
Efrat, Alon
2
Eppstein, David
1
Frishberg, Daniel
1
https://orcid.org/0000-0002-1861-5439
Goodrich, Michael T.
1
https://orcid.org/0000-0002-8943-191X
Kobourov, Stephen
2
https://orcid.org/0000-0002-0477-2724
Matias, Pedro
1
https://orcid.org/0000-0003-0664-9145
Polishchuk, Valentin
3
Department of Computer Science, University of California, Irvine, USA
Department of Computer Science, University of Arizona, Tucson, USA
Communications and Transport Systems, ITN, Linköping University, Sweden
We show new applications of the nearest-neighbor chain algorithm, a technique that originated in agglomerative hierarchical clustering. We use it to construct the greedy multi-fragment tour for Euclidean TSP in O(n log n) time in any fixed dimension and for Steiner TSP in planar graphs in O(n sqrt(n)log n) time; we compute motorcycle graphs, a central step in straight skeleton algorithms, in O(n^(4/3+epsilon)) time for any epsilon>0.
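The nearest-neighbor chain technique itself, in its original clustering habitat, can be sketched as follows; complete linkage and the tiny 1-D input are illustrative choices (complete linkage is "reducible", the property that makes mutual nearest neighbors safe to merge early):

```python
def nn_chain(points):
    """Agglomerative clustering by nearest-neighbor chains.

    Grow a chain of clusters, each the nearest neighbour of its
    predecessor; when the chain tip and its predecessor are mutual
    nearest neighbours, merge them. Returns the merge history.
    Brute-force distance evaluations; fine for tiny inputs.
    """
    def dist(c1, c2):  # complete linkage: farthest pair between clusters
        return max(abs(points[i] - points[j]) for i in c1 for j in c2)

    clusters = [frozenset([i]) for i in range(len(points))]
    merges, chain = [], []
    while len(clusters) > 1:
        if not chain:
            chain.append(clusters[0])
        c = chain[-1]
        # nearest neighbour of the chain's tip among the other clusters
        nn = min((d for d in clusters if d != c), key=lambda d: dist(c, d))
        if len(chain) >= 2 and dist(c, chain[-2]) <= dist(c, nn):
            a, b = chain.pop(), chain.pop()       # mutual nearest neighbours
            clusters.remove(a)
            clusters.remove(b)
            clusters.append(a | b)
            merges.append(sorted(a | b))
        else:
            chain.append(nn)
    return merges
```

The paper's insight is that the same chain-following idea applies beyond clustering, e.g. to the multi-fragment TSP heuristic.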
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.51/LIPIcs.ISAAC.2019.51.pdf
Nearest-neighbors
Nearest-neighbor chain
motorcycle graph
straight skeleton
multi-fragment algorithm
Euclidean TSP
Steiner TSP
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
52:1
52:21
10.4230/LIPIcs.ISAAC.2019.52
article
Efficient Circuit Simulation in MapReduce
Frei, Fabian
1
Wada, Koichi
2
Department of Computer Science, ETH Zürich, Universitätstrasse 6, CH-8006 Zürich, Switzerland
Department of Applied Informatics, Hosei University, 3-7-2 Kajino, 184-8584 Tokyo, Japan
The MapReduce framework has firmly established itself as one of the most widely used parallel computing platforms for processing big data on terabyte and petabyte scale. Approaching it from a theoretical standpoint has proved to be notoriously difficult, however. In continuation of Goodrich et al.’s early efforts, explicitly espousing the goal of putting the MapReduce framework on footing equal to that of long-established models such as the PRAM, we investigate the obvious complexity question of how the computational power of MapReduce algorithms compares to that of combinational Boolean circuits commonly used for parallel computations. Relying on the standard MapReduce model introduced by Karloff et al. a decade ago, we develop an intricate simulation technique to show that any problem in NC (i.e., a problem solved by a logspace-uniform family of Boolean circuits of polynomial size and a depth polylogarithmic in the input size) can be solved by a MapReduce computation in O(T(n)/log n) rounds, where n is the input size and T(n) is the depth of the witnessing circuit family. Thus, we are able to closely relate the standard, uniform NC hierarchy modeling parallel computations to the deterministic MapReduce hierarchy DMRC by proving that NC^{i+1} subseteq DMRC^i for all i in N. Besides its theoretical significance, this result has important applied aspects as well. In particular, we show how to solve all problems in NC^1 - many practically relevant ones, such as integer multiplication and division and the parity function, among them - in a constant number of deterministic MapReduce rounds.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.52/LIPIcs.ISAAC.2019.52.pdf
MapReduce
Circuit Complexity
Parallel Algorithms
Nick’s Class NC
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
53:1
53:18
10.4230/LIPIcs.ISAAC.2019.53
article
Concurrent Distributed Serving with Mobile Servers
Ghodselahi, Abdolhamid
1
Kuhn, Fabian
2
Turau, Volker
1
Institute of Telematics, Hamburg University of Technology, Germany
Department of Computer Science, University of Freiburg, Germany
This paper introduces a new resource allocation problem in distributed computing called distributed serving with mobile servers (DSMS). In DSMS, there are k identical mobile servers residing at the processors of a network. At arbitrary points of time, any subset of processors can invoke one or more requests. To serve a request, one of the servers must move to the processor that invoked the request. Resource allocation is performed in a distributed manner since only the processor that invoked the request initially knows about it. All processors cooperate by passing messages to achieve correct resource allocation. They do this with the goal of minimizing the communication cost.
Routing servers in large-scale distributed systems requires a scalable location service. We introduce the distributed protocol Gnn that solves the DSMS problem on overlay trees. We prove that Gnn is starvation-free and correctly integrates locating the servers and synchronizing the concurrent access to servers despite asynchrony, even when the requests are invoked over time. Further, we analyze Gnn for "one-shot" executions, i.e., all requests are invoked simultaneously. We prove that when running Gnn on top of a special family of tree topologies - known as hierarchically well-separated trees (HSTs) - we obtain a randomized distributed protocol with an expected competitive ratio of O(log n) on general network topologies with n processors. From a technical point of view, our main result is that Gnn optimally solves the DSMS problem on HSTs for one-shot executions, even if communication is asynchronous. Further, we present a lower bound of Omega(max {k, log n/log log n}) on the competitive ratio for DSMS. The lower bound even holds when communication is synchronous and requests are invoked sequentially.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.53/LIPIcs.ISAAC.2019.53.pdf
Distributed online resource allocation
Distributed directory
Asynchronous communication
Amortized analysis
Tree embeddings
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
54:1
54:17
10.4230/LIPIcs.ISAAC.2019.54
article
Tracking Paths in Planar Graphs
Eppstein, David
1
Goodrich, Michael T.
1
https://orcid.org/0000-0002-8943-191X
Liu, James A.
1
Matias, Pedro
1
https://orcid.org/0000-0003-0664-9145
Department of Computer Science, University of California, Irvine, USA
We consider the NP-complete problem of tracking paths in a graph, first introduced by Banik et al. [Banik et al., 2017]. Given an undirected graph with a source s and a destination t, find the smallest subset of vertices whose intersection with any s-t path results in a unique sequence. In this paper, we show that this problem remains NP-complete when the graph is planar, and we give a 4-approximation algorithm in this setting. We also show, via Courcelle’s theorem, that it can be solved in linear time for graphs of bounded clique-width, when a clique decomposition is given in advance.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.54/LIPIcs.ISAAC.2019.54.pdf
Approximation Algorithm
Courcelle’s Theorem
Clique-Width
Planar
3-SAT
Graph Algorithms
NP-Hardness
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
55:1
55:15
10.4230/LIPIcs.ISAAC.2019.55
article
Distance Measures for Embedded Graphs
Akitaya, Hugo A.
1
Buchin, Maike
2
Kilgus, Bernhard
2
Sijben, Stef
2
Wenk, Carola
3
Department of Computer Science, Tufts University, Medford, MA, USA
Department of Mathematics, Ruhr University Bochum, Bochum, Germany
Department of Computer Science, Tulane University, New Orleans, LA, USA
We introduce new distance measures for comparing straight-line embedded graphs based on the Fréchet distance and the weak Fréchet distance. These graph distances are defined using continuous mappings and thus take the combinatorial structure as well as the geometric embeddings of the graphs into account. We present a general algorithmic approach for computing these graph distances. Although we show that deciding the distances is NP-hard for general embedded graphs, we prove that our approach yields polynomial time algorithms if the graphs are trees, and for the distance based on the weak Fréchet distance if the graphs are planar embedded. Moreover, we prove that deciding the distances based on the Fréchet distance remains NP-hard for planar embedded graphs and show how our general algorithmic approach yields an exponential time algorithm and a polynomial time approximation algorithm for this case. Our work combines and extends the work of Buchin et al. [Maike Buchin et al., 2017] and Akitaya et al. [Hugo Akitaya et al., 2019] presented at EuroCG.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.55/LIPIcs.ISAAC.2019.55.pdf
Fréchet distance
graph comparison
embedded graphs
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
56:1
56:21
10.4230/LIPIcs.ISAAC.2019.56
article
Online Algorithms for Warehouse Management
Dasler, Philip
1
https://orcid.org/0000-0001-7442-7216
Mount, David M.
1
https://orcid.org/0000-0002-3290-8932
Department of Computer Science, University of Maryland, College Park, USA
As the prevalence of E-commerce continues to grow, the efficient operation of warehouses and fulfillment centers is becoming increasingly important. To this end, many such warehouses are adding automation in order to help streamline operations, drive down costs, and increase overall efficiency. The introduction of automation comes with the opportunity for new theoretical models and computational problems with which to better understand and optimize such systems.
These systems often maintain a warehouse of standardized portable storage units, which are stored and retrieved by robotic workers. In general, there are two principal issues in optimizing such a system: where in the warehouse each storage unit should be located and how best to retrieve them. These two concerns naturally go hand-in-hand, but are further complicated by the unknown request frequencies of stored products. Analogous to virtual-memory systems, the more popular and oft-requested an item is, the more efficient its retrieval should be. In this paper, we propose a theoretical model for organizing portable storage units in a warehouse subject to an online sequence of access requests. We consider two formulations, depending on whether there is a single access point or multiple access points. We present algorithms that are O(1)-competitive with respect to an optimal algorithm. In the case of a single access point, our solution is also asymptotically optimal with respect to density.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.56/LIPIcs.ISAAC.2019.56.pdf
Warehouse management
online algorithms
competitive analysis
robotics
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
57:1
57:14
10.4230/LIPIcs.ISAAC.2019.57
article
On Approximate Range Mode and Range Selection
El-Zein, Hicham
1
He, Meng
2
Munro, J. Ian
1
Nekrich, Yakov
3
Sandlund, Bryce
1
Cheriton School of Computer Science, University of Waterloo, Canada
Faculty of Computer Science, Dalhousie University, Canada
Department of Computer Science, Michigan Technological University, USA
For any epsilon in (0,1), a (1+epsilon)-approximate range mode query asks for the position of an element whose frequency in the query range is at most a factor (1+epsilon) smaller than the true mode. For this problem, we design a data structure occupying O(n/epsilon) bits of space to answer queries in O(lg(1/epsilon)) time. This is an encoding data structure which does not require access to the input sequence; the space cost of this structure is asymptotically optimal for constant epsilon, as we also prove a matching lower bound. Furthermore, our solution improves the previous best result of Greve et al. (Cell Probe Lower Bounds and Approximations for Range Mode, ICALP'10), reducing the space cost by a factor of lg n while achieving the same query time. In the dynamic setting, we design an O(n)-word data structure that answers queries in O(lg n / lg lg n) time and supports insertions and deletions in O(lg n) time, for any constant epsilon in (0,1); the bounds for non-constant epsilon = o(1) are also given in the paper. This is the first result on dynamic approximate range mode; it can also be used to obtain the first static data structure for approximate 3-sided range mode queries in two dimensions.
Another problem we consider is approximate range selection. For any alpha in (0,1/2), an alpha-approximate range selection query asks for the position of an element whose rank in the query range is in [k - alpha s, k + alpha s], where k is a rank given by the query and s is the size of the query range. When alpha is a constant, we design an O(n)-bit encoding data structure that can answer queries in constant time and prove this space cost is asymptotically optimal. The previous best result by Krizanc et al. (Range Mode and Range Median Queries on Lists and Trees, Nordic Journal of Computing, 2005) uses O(n lg n) bits, or O(n) words, to achieve constant approximation for range median only. Thus we not only improve the space cost, but also support an arbitrary k given at query time. We also analyse our solutions for non-constant alpha.
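To make the approximation guarantee concrete, here is a naive verifier for the (1+epsilon)-approximate range mode condition stated in the abstract. It is an illustrative checker over the definition only, not the paper's encoding data structure; the sequence, range, and epsilon values below are hypothetical.

```python
from collections import Counter

def is_approx_mode(seq, l, r, pos, eps):
    """Check the (1+eps)-approximate range mode condition on seq[l..r]:
    the frequency of seq[pos] within the range must be at least
    f*/(1+eps), where f* is the true mode frequency of the range."""
    if not (l <= pos <= r):
        return False
    freq = Counter(seq[l:r + 1])
    true_mode_freq = max(freq.values())
    return freq[seq[pos]] * (1 + eps) >= true_mode_freq

# Example: in seq[0..7], value 3 occurs 4 times (the true mode) and
# value 2 occurs 3 times, so position 1 (value 2) is a valid
# 1.5-approximate answer while position 0 (value 1, frequency 1) is not.
seq = [1, 2, 2, 3, 2, 3, 3, 3]
```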
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.57/LIPIcs.ISAAC.2019.57.pdf
data structures
approximate range query
range mode
range median
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
58:1
58:18
10.4230/LIPIcs.ISAAC.2019.58
article
External Memory Planar Point Location with Fast Updates
Iacono, John
1
2
Karsin, Ben
1
Koumoutsos, Grigorios
1
Université Libre de Bruxelles, Belgium
New York University, USA
We study dynamic planar point location in the External Memory Model or Disk Access Model (DAM). Previous work in this model achieves polylog query and polylog amortized update time. We present a data structure with O(log_B^2 N) query time and O(1/B^(1-epsilon) log_B N) amortized update time, where N is the number of segments, B the block size and epsilon is a small positive constant, under the assumption that all faces have constant size. This is a B^(1-epsilon) factor faster for updates than the fastest previous structure, and brings the cost of insertion and deletion down to subconstant amortized time for reasonable choices of N and B. Our structure solves the problem of vertical ray-shooting queries among a dynamic set of interior-disjoint line segments; this is well-known to solve dynamic planar point location for a connected subdivision of the plane with faces of constant size.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.58/LIPIcs.ISAAC.2019.58.pdf
point location
data structures
dynamic algorithms
computational geometry
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
59:1
59:19
10.4230/LIPIcs.ISAAC.2019.59
article
Minimizing and Computing the Inverse Geodesic Length on Trees
Gaspers, Serge
1
2
https://orcid.org/0000-0002-6947-9238
Lau, Joshua
1
https://orcid.org/0000-0001-7490-633X
UNSW Sydney, Australia
Data61, CSIRO, Sydney, Australia
For any fixed measure H that maps graphs to real numbers, the MinH problem is defined as follows: given a graph G, an integer k, and a target tau, is there a set S of k vertices that can be deleted, so that H(G - S) is at most tau? In this paper, we consider the MinH problem on trees.
We call H balanced on trees if, whenever G is a tree, there is an optimal choice of S such that the components of G - S have sizes bounded by a polynomial in n / k. We show that MinH on trees is Fixed-Parameter Tractable (FPT) for parameter n / k, and furthermore, can be solved in subexponential time, and polynomial space, whenever H is additive, balanced on trees, and computable in polynomial time.
A particular measure of interest is the Inverse Geodesic Length (IGL), which is used to gauge the efficiency and connectedness of a graph. It is defined as the sum of inverse distances between every two vertices: IGL(G) = sum_{{u,v} subseteq V} 1/d_G(u,v). While MinIGL is W[1]-hard for parameter treewidth, and cannot be solved in 2^{o(k + n + m)} time, even on bipartite graphs with n vertices and m edges, the complexity status of the problem remains open in the case where G is a tree. We show that IGL is balanced on trees, giving a 2^O((n log n)^(5/6))-time, polynomial-space algorithm.
The distance distribution of G is the sequence {a_i} describing the number of vertex pairs distance i apart in G: a_i = |{{u, v}: d_G(u, v) = i}|. Given only the distance distribution, one can easily determine graph parameters such as diameter, Wiener index, and particularly, the IGL. We show that the distance distribution of a tree can be computed in O(n log^2 n) time by reduction to polynomial multiplication. We also extend the result to graphs with small treewidth by showing that the first p values of the distance distribution can be computed in 2^(O(tw(G))) n^(1 + epsilon) sqrt(p) time, and the entire distance distribution can be computed in 2^(O(tw(G))) n^{1 + epsilon} time, when the diameter of G is O(n^epsilon') for every epsilon' > 0.
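The two quantities defined in this abstract can be illustrated with a naive BFS-based sketch. This is quadratic time, unlike the paper's O(n log^2 n) polynomial-multiplication algorithm, and the example path graph is hypothetical.

```python
from collections import deque, defaultdict

def distance_distribution(adj):
    """Distance distribution {a_i}: number of unordered vertex pairs at
    distance i. Naive O(n*(n+m)) version via BFS from every vertex."""
    counts = defaultdict(int)
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if v > s:  # count each unordered pair {s, v} once
                counts[d] += 1
    return dict(counts)

def igl(adj):
    """Inverse geodesic length: sum over pairs {u,v} of 1/d(u,v),
    computed directly from the distance distribution."""
    return sum(a / i for i, a in distance_distribution(adj).items())

# Example: the path 0 - 1 - 2 has a_1 = 2, a_2 = 1, so IGL = 2 + 1/2.
path = {0: [1], 1: [0, 2], 2: [1]}
```

As the abstract notes, diameter and Wiener index are equally immediate from the distribution (max index and sum of i * a_i, respectively).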
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.59/LIPIcs.ISAAC.2019.59.pdf
Trees
Treewidth
Fixed-Parameter Tractability
Inverse Geodesic Length
Vertex deletion
Polynomial multiplication
Distance distribution
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
60:1
60:15
10.4230/LIPIcs.ISAAC.2019.60
article
Result-Sensitive Binary Search with Noisy Information
Epa, Narthana S.
1
Gan, Junhao
1
https://orcid.org/0000-0001-9101-1503
Wirth, Anthony
1
https://orcid.org/0000-0003-3746-6704
School of Computing and Information Systems, The University of Melbourne, Victoria, Australia
We describe new algorithms for the predecessor problem in the Noisy Comparison Model. In this problem, given a sorted list L of n (distinct) elements and a query q, we seek the predecessor of q in L: the largest element less than or equal to q, denoted by u. In the Noisy Comparison Model, the result of a comparison between two elements is non-deterministic. Moreover, multiple comparisons of the same pair of elements might have different results: each is generated independently, and is correct with probability p > 1/2. Given an overall error tolerance Q, the cost of an algorithm is measured by the total number of noisy comparisons; these must guarantee the predecessor is returned with probability at least 1 - Q. Feige et al. showed that predecessor queries can be answered by a modified binary search with Theta(log (n/Q)) noisy comparisons.
We design result-sensitive algorithms for answering predecessor queries. The query cost is related to the index, k, of the predecessor u in L. Our first algorithm answers predecessor queries with O(log ((log^{*(c)} n)/Q) + log (k/Q)) noisy comparisons, for an arbitrarily large constant c. The function log^{*(c)} n iterates c times the iterated-logarithm function, log^* n. Our second algorithm is a genuinely result-sensitive algorithm whose expected query cost is bounded by O(log (k/Q)), and is guaranteed to terminate after at most O(log((log n)/Q)) noisy comparisons.
Our results strictly improve the state-of-the-art bounds when k is in omega(1) intersected with o(n^epsilon), where epsilon > 0 is some constant. Moreover, we show that our result-sensitive algorithms immediately improve not only predecessor-query algorithms, but also binary-search-like algorithms for solving key applications.
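A standard baseline in this model, independent of the paper's result-sensitive algorithms, is to boost each comparison by majority voting and then run ordinary binary search. The sketch below simulates noisy comparisons; the success probability `p`, repetition count `reps`, and example list are assumptions for illustration.

```python
import random

def noisy_less_equal(x, q, p, rng):
    """Simulated noisy comparison: reports whether x <= q,
    correctly with probability p > 1/2."""
    truth = x <= q
    return truth if rng.random() < p else not truth

def robust_compare(x, q, p, rng, reps=31):
    """Boost a single noisy comparison by majority vote over
    `reps` independent trials (error probability drops
    exponentially in reps)."""
    votes = sum(noisy_less_equal(x, q, p, rng) for _ in range(reps))
    return 2 * votes > reps

def noisy_predecessor(L, q, p=0.9, rng=None, reps=31):
    """Binary search for the predecessor of q using majority-voted
    noisy comparisons. Assumes L is sorted and L[0] <= q."""
    rng = rng or random.Random(0)
    lo, hi = 0, len(L) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if robust_compare(L[mid], q, p, rng, reps):
            lo = mid  # L[mid] <= q, predecessor is at mid or later
        else:
            hi = mid - 1
    return L[lo]
```

The paper's contribution is precisely to beat this kind of index-oblivious search when the predecessor's index k is small.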
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.60/LIPIcs.ISAAC.2019.60.pdf
Fault-tolerant search
random walks
noisy comparisons
predecessor queries
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
61:1
61:12
10.4230/LIPIcs.ISAAC.2019.61
article
Improved Algorithms for Clustering with Outliers
Feng, Qilong
1
Zhang, Zhen
1
Huang, Ziyun
2
Xu, Jinhui
3
Wang, Jianxin
1
School of Computer Science and Engineering, Central South University, P.R. China
Department of Computer Science and Software Engineering, Penn State Erie, The Behrend College, Erie, PA, USA
Department of Computer Science and Engineering, State University of New York at Buffalo, USA
Clustering is a fundamental problem in unsupervised learning. In many real-world applications, the to-be-clustered data often contains various types of noise, which needs to be removed from the learning process. To address this issue, we consider in this paper two variants of such clustering problems, called k-median with m outliers and k-means with m outliers. Existing techniques for both problems either incur relatively large approximation ratios or can only efficiently deal with a small number of outliers. In this paper, we present an improved solution to each of them for the case where k is a fixed number and m could be quite large. In particular, we give the first PTAS for the k-median problem with outliers in Euclidean space R^d for possibly large m and d. Our algorithm runs in O(nd((1/epsilon)(k+m))^(k/epsilon)^O(1)) time, which considerably improves the previous result (with running time O(nd(m+k)^O(m+k) + (1/epsilon)k log n)^O(1))) given by [Feldman and Schulman, SODA 2012]. For the k-means with outliers problem, we introduce a (6+epsilon)-approximation algorithm for general metric spaces with running time O(n(beta (1/epsilon)(k+m))^k) for some constant beta>1. Our algorithm first uses the k-means++ technique to sample O((1/epsilon)(k+m)) points from the input and then selects the k centers from them. Compared to the more involved existing techniques, our algorithms are much simpler, using only random sampling, and achieve better approximation ratios.
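The objective being approximated is easy to state in code. The helper below evaluates the k-means-with-m-outliers cost of a candidate center set for points in the plane (an illustrative evaluator of the objective only, not the paper's sampling algorithm; the example points and centers are hypothetical).

```python
def cost_with_outliers(points, centers, m):
    """k-means cost of `centers` on 2-D `points`, discarding the m
    points farthest from their nearest center (the outliers)."""
    if m >= len(points):
        return 0.0
    # squared distance from each point to its nearest center
    d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
          for px, py in points]
    d2.sort()
    return sum(d2[:len(points) - m])  # drop the m largest distances

# Example: with one center at the origin and m = 1, the far-away
# point (10, 10) is treated as the outlier and contributes nothing.
```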
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.61/LIPIcs.ISAAC.2019.61.pdf
Clustering with Outliers
Approximation
Random Sampling
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
62:1
62:15
10.4230/LIPIcs.ISAAC.2019.62
article
Unbounded Regions of High-Order Voronoi Diagrams of Lines and Segments in Higher Dimensions
Barequet, Gill
1
Papadopoulou, Evanthia
2
https://orcid.org/0000-0003-0144-7384
Suderland, Martin
2
https://orcid.org/0000-0002-6604-6381
Dept. of Computer Science, The Technion - Israel Inst. of Technology, Haifa 3200003, Israel
Faculty of Informatics, Università della Svizzera italiana, Lugano, Switzerland
We study the behavior at infinity of the farthest and the higher-order Voronoi diagram of n line segments or lines in a d-dimensional Euclidean space. The unbounded parts of these diagrams can be encoded by a Gaussian map on the sphere of directions S^(d-1). We show that the combinatorial complexity of the Gaussian map for the order-k Voronoi diagram of n line segments or lines is O(min{k,n-k} n^(d-1)), which is tight for n-k = O(1). All the d-dimensional cells of the farthest Voronoi diagram are unbounded, its (d-1)-skeleton is connected, and it does not have tunnels. A d-cell of the Voronoi diagram is called a tunnel if the set of its unbounded directions, represented as points on its Gaussian map, is not connected. In a three-dimensional space, the farthest Voronoi diagram of lines has exactly n^2-n three-dimensional cells, when n >= 2. The Gaussian map of the farthest Voronoi diagram of line segments or lines can be constructed in O(n^(d-1) alpha(n)) time, while if d=3, the time drops to worst-case optimal O(n^2).
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.62/LIPIcs.ISAAC.2019.62.pdf
Voronoi diagram
lines
line segments
higher-order
order-k
unbounded
hypersphere arrangement
great hyperspheres
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
63:1
63:16
10.4230/LIPIcs.ISAAC.2019.63
article
Neighborhood Inclusions for Minimal Dominating Sets Enumeration: Linear and Polynomial Delay Algorithms in P_7-Free and P_8-Free Chordal Graphs
Defrain, Oscar
1
2
Nourine, Lhouari
1
3
Université Clermont Auvergne, France
oscar.defrain@uca.fr
lhouari.nourine@uca.fr
In [M. M. Kanté, V. Limouzy, A. Mary, and L. Nourine. On the enumeration of minimal dominating sets and related notions. SIAM Journal on Discrete Mathematics, 28(4):1916–1929, 2014.] the authors give an O(n+m) delay algorithm based on neighborhood inclusions for the enumeration of minimal dominating sets in split and P_6-free chordal graphs. In this paper, we investigate generalizations of this technique to P_k-free chordal graphs for larger integers k. In particular, we give O(n+m) and O(n^3 * m) delay algorithms in the classes of P_7-free and P_8-free chordal graphs. As for P_k-free chordal graphs with k >= 9, we give evidence that such a technique is inefficient, as a key step of the algorithm, namely the irredundant extension problem, becomes NP-complete.
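For intuition about the objects being enumerated, here is a brute-force enumeration of minimal dominating sets. It runs in exponential time; the paper's point is enumeration with linear or polynomial *delay* between consecutive outputs, which this sketch makes no attempt at. The example graph is hypothetical.

```python
from itertools import combinations

def dominates(adj, S):
    """True iff every vertex is in S or has a neighbor in S."""
    S = set(S)
    return all(v in S or S & set(adj[v]) for v in adj)

def minimal_dominating_sets(adj):
    """All dominating sets that are minimal under vertex removal
    (brute force over every vertex subset)."""
    verts = list(adj)
    out = []
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            if dominates(adj, S) and all(
                    not dominates(adj, set(S) - {v}) for v in S):
                out.append(frozenset(S))
    return out

# Example: the path 0 - 1 - 2 has exactly two minimal dominating
# sets, {1} and {0, 2}.
```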
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.63/LIPIcs.ISAAC.2019.63.pdf
Minimal dominating sets
enumeration algorithms
linear delay enumeration
chordal graphs
forbidden induced paths
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2019-11-28
149
64:1
64:18
10.4230/LIPIcs.ISAAC.2019.64
article
Dual-Mode Greedy Algorithms Can Save Energy
Geissmann, Barbara
1
https://orcid.org/0000-0002-9236-8798
Leucci, Stefano
2
https://orcid.org/0000-0002-8848-7006
Liu, Chih-Hung
1
https://orcid.org/0000-0001-9683-5982
Penna, Paolo
1
https://orcid.org/0000-0002-5959-2421
Proietti, Guido
3
4
https://orcid.org/0000-0003-1009-5552
Department of Computer Science, ETH Zürich, Switzerland
Department of Algorithms and Complexity, Max Planck Institute for Informatics, Germany
Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica, Università dell'Aquila, Italy
Istituto di Analisi dei Sistemi ed Informatica "A. Ruberti", CNR, Roma, Italy
In real-world applications, important resources like energy are saved by deliberately using so-called low-cost operations that are less reliable. Some of these approaches are based on a dual-mode technology where it is possible to choose between high-energy operations (always correct) and low-energy operations (prone to errors), and thus make it possible to trade energy for correctness.
In this work we initiate the study of algorithms for solving optimization problems that in their computation are allowed to choose between two types of operations: high-energy comparisons (always correct but expensive) and low-energy comparisons (cheaper but prone to errors). For the errors in low-energy comparisons, we assume the persistent setting, which usually makes it impossible to achieve optimal solutions without high-energy comparisons. We propose to study a natural complexity measure which accounts for the number of operations of either type separately.
We provide a new family of algorithms which, for a fairly large class of maximization problems, return a constant approximation using only polylogarithmically many high-energy comparisons and only O(n log n) low-energy comparisons. This result applies to the class of p-extendible systems [Mestre, 2006], which includes several NP-hard problems, and matroids as a special case (p=1).
These algorithmic solutions relate to some fundamental aspects studied earlier in different contexts: (i) the approximation guarantee when only ordinal information is available to the algorithm; (ii) the fact that even such ordinal information may be erroneous because of low-energy comparisons and (iii) the ability to approximately sort a sequence of elements when comparisons are subject to persistent errors. Finally, our main result is quite general and can be parametrized and adapted to other error models.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol149-isaac2019/LIPIcs.ISAAC.2019.64/LIPIcs.ISAAC.2019.64.pdf
matroids
p-extendible systems
greedy algorithm
approximation algorithms
high-low energy