eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
0
0
10.4230/OASIcs.SOSA.2019
article
OASIcs, Volume 69, SOSA'19, Complete Volume
Fineman, Jeremy T.
Mitzenmacher, Michael
OASIcs, Volume 69, SOSA'19, Complete Volume
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019/OASIcs.SOSA.2019.pdf
Theory of computation, Design and analysis of algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
0:i
0:x
10.4230/OASIcs.SOSA.2019.0
article
Front Matter, Table of Contents, Preface, Conference Organization
Fineman, Jeremy T.
Mitzenmacher, Michael
Front Matter, Table of Contents, Preface, Conference Organization
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.0/OASIcs.SOSA.2019.0.pdf
Front Matter
Table of Contents
Preface
Conference Organization
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
1:1
1:18
10.4230/OASIcs.SOSA.2019.1
article
Isotonic Regression by Dynamic Programming
Rote, Günter
For a given sequence of numbers, we want to find a monotonically increasing sequence of the same length that best approximates it in the sense of minimizing the weighted sum of absolute values of the differences. A conceptually easy dynamic programming approach leads to an algorithm with running time O(n log n). While other algorithms with the same running time are known, our algorithm is very simple. The only auxiliary data structure that it requires is a priority queue. The approach extends to other error measures.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.1/OASIcs.SOSA.2019.1.pdf
Convex functions
dynamic programming
convex hull
isotonic regression
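The priority-queue approach the abstract describes can be illustrated, for the unweighted L1 case, by the classic slope-trimming sketch below. This is a minimal illustration of the technique (with hypothetical names), not a transcription of the paper's algorithm, which also handles weights and other error measures.

```python
import heapq

def isotonic_l1_cost(a):
    """Minimum total L1 error of approximating the sequence `a` by a
    nondecreasing sequence.  The heap holds the breakpoints of the
    convex piecewise-linear dynamic-programming cost function; only a
    priority queue is needed, as in the abstract."""
    heap = []   # max-heap, simulated by pushing negated values
    cost = 0
    for x in a:
        heapq.heappush(heap, -x)
        top = -heap[0]
        if top > x:              # optimum so far must be pulled down to x
            cost += top - x      # pay the L1 distance between breakpoints
            heapq.heapreplace(heap, -x)
    return cost
```

For example, `isotonic_l1_cost([3, 1, 2])` returns 2 (approximate by the nondecreasing sequence [2, 2, 2]).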
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
2:1
2:11
10.4230/OASIcs.SOSA.2019.2
article
An Illuminating Algorithm for the Light Bulb Problem
Alman, Josh
The Light Bulb Problem is one of the most basic problems in data analysis. One is given as input n vectors in {-1,1}^d, which are all independently and uniformly random, except for a planted pair of vectors with inner product at least rho * d for some constant rho > 0. The task is to find the planted pair. The most straightforward algorithm leads to a runtime of Omega(n^2). Algorithms based on techniques like Locality-Sensitive Hashing achieve runtimes of n^{2 - O(rho)}; as rho gets small, these approach quadratic.
Building on prior work, we give a new algorithm for this problem which runs in time O(n^{1.582} + nd), regardless of how small rho is. This matches the best known runtime due to Karppa et al. Our algorithm combines techniques from previous work on the Light Bulb Problem with the so-called "polynomial method in algorithm design," and has a simpler analysis than previous work. Our algorithm is also easily derandomized, leading to a deterministic algorithm for the Light Bulb Problem with the same runtime of O(n^{1.582} + nd), improving previous results.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.2/OASIcs.SOSA.2019.2.pdf
Light Bulb Problem
Polynomial Method
Finding Correlations
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
3:1
3:20
10.4230/OASIcs.SOSA.2019.3
article
Simple Concurrent Labeling Algorithms for Connected Components
Liu, Sixue
Tarjan, Robert E.
We present new concurrent labeling algorithms for finding connected components, and we study their theoretical efficiency. Even though many such algorithms have been proposed and many experiments with them have been done, our algorithms are simpler. We obtain an O(lg n) step bound for two of our algorithms using a novel multi-round analysis. We conjecture that our other algorithms also take O(lg n) steps but are only able to prove an O(lg^2 n) bound. We also point out some gaps in previous analyses of similar algorithms. Our results show that even a basic problem like connected components still has secrets to reveal.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.3/OASIcs.SOSA.2019.3.pdf
Connected Components
Concurrent Algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
4:1
4:17
10.4230/OASIcs.SOSA.2019.4
article
A Framework for Searching in Graphs in the Presence of Errors
Dereniowski, Dariusz
Tiegel, Stefan
Uznanski, Przemyslaw
Wolleb-Graf, Daniel
We consider a problem of searching for an unknown target vertex t in a (possibly edge-weighted) graph. Each vertex-query points to a vertex v and the response either admits that v is the target or provides any neighbor s of v that lies on a shortest path from v to t. This model has been introduced for trees by Onak and Parys [FOCS 2006] and for general graphs by Emamjomeh-Zadeh et al. [STOC 2016]. In the latter, the authors provide algorithms for the error-less case and for the independent noise model (where each query independently receives an erroneous answer with known probability p<1/2 and a correct one with probability 1-p).
We study this problem under both the adversarial errors and the independent noise models. First, we show an algorithm that needs at most (log_2 n)/(1 - H(r)) queries in the case of adversarial errors, where the adversary's rate of errors is bounded by a known constant r<1/2. Our algorithm is in fact a simplification of previous work, and our refinement lies in invoking an amortization argument. We then show that our algorithm, coupled with a Chernoff bound argument, leads to a simpler algorithm for the independent noise model, with a query complexity that is asymptotically better than that of Emamjomeh-Zadeh et al. [STOC 2016].
Our approach has a wide range of applications. First, it improves and simplifies the Robust Interactive Learning framework proposed by Emamjomeh-Zadeh and Kempe [NIPS 2017]. Secondly, performing an analogous analysis for edge-queries (where a query to an edge e returns its endpoint that is closer to the target), we recover (as a special case) a noisy binary search algorithm that is asymptotically optimal, matching the complexity of Feige et al. [SIAM J. Comput. 1994]. Thirdly, we improve and simplify an algorithm for searching unbounded domains due to Aslam and Dhagat [STOC 1991].
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.4/OASIcs.SOSA.2019.4.pdf
graph algorithms
noisy binary search
query complexity
reliability
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
5:1
5:21
10.4230/OASIcs.SOSA.2019.5
article
Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps
Kaplan, Haim
Kozma, László
Zamir, Or
Zwick, Uri
We use soft heaps to obtain simpler optimal algorithms for selecting the k-th smallest item, and the set of k smallest items, from a heap-ordered tree, from a collection of sorted lists, and from X+Y, where X and Y are two unsorted sets. Our results match, and in some ways extend and improve, classical results of Frederickson (1993) and Frederickson and Johnson (1982). In particular, for selecting the k-th smallest item, or the set of k smallest items, from a collection of m sorted lists we obtain a new optimal "output-sensitive" algorithm that performs only O(m + sum_{i=1}^m log(k_i+1)) comparisons, where k_i is the number of items of the i-th list that belong to the overall set of k smallest items.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.5/OASIcs.SOSA.2019.5.pdf
selection
soft heap
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
6:1
6:9
10.4230/OASIcs.SOSA.2019.6
article
Approximating Optimal Transport With Linear Programs
Quanrud, Kent
In the regime of bounded transportation costs, additive approximations for the optimal transport problem are reduced (rather simply) to relative approximations for positive linear programs, resulting in faster additive approximation algorithms for optimal transport.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.6/OASIcs.SOSA.2019.6.pdf
optimal transport
fast approximations
linear programming
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
7:1
7:18
10.4230/OASIcs.SOSA.2019.7
article
LP Relaxation and Tree Packing for Minimum k-cuts
Chekuri, Chandra
Quanrud, Kent
Xu, Chao
Karger used spanning tree packings [Karger, 2000] to derive a near linear-time randomized algorithm for the global minimum cut problem as well as a bound on the number of approximate minimum cuts. This is a different approach from his well-known random contraction algorithm [Karger, 1995; Karger and Stein, 1996]. Thorup developed a fast deterministic algorithm for the minimum k-cut problem via greedy recursive tree packings [Thorup, 2008].
In this paper we revisit properties of an LP relaxation for k-cut proposed by Naor and Rabani [Naor and Rabani, 2001], and analyzed in [Chekuri et al., 2006]. First, we show that the dual of the LP yields a tree packing that, when combined with an upper bound on the integrality gap for the LP, easily and transparently extends Karger's analysis for mincut to the k-cut problem. In addition to the simplicity of the algorithm and its analysis, this allows us to improve the running time of Thorup's algorithm by a factor of n. We also improve the bound on the number of alpha-approximate k-cuts. Second, we give a simple proof that the integrality gap of the LP is 2(1-1/n). Third, we show that an optimum solution to the LP relaxation, for all values of k, is fully determined by the principal sequence of partitions of the input graph. This allows us to relate the LP relaxation to the Lagrangean relaxation approach of Barahona [Barahona, 2000] and Ravi and Sinha [Ravi and Sinha, 2008]; it also shows that the idealized recursive tree packing considered by Thorup gives an optimum dual solution to the LP. This work arose from an effort to understand and simplify the results of Thorup [Thorup, 2008].
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.7/OASIcs.SOSA.2019.7.pdf
k-cut
LP relaxation
tree packing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
8:1
8:18
10.4230/OASIcs.SOSA.2019.8
article
On Primal-Dual Circle Representations
Felsner, Stefan
Rote, Günter
The Koebe-Andreev-Thurston Circle Packing Theorem states that every triangulated planar graph has a contact representation by circles. The theorem has been generalized in various ways. The most prominent generalization assures the existence of a primal-dual circle representation for every 3-connected planar graph. We present a simple and elegant elementary proof of this result.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.8/OASIcs.SOSA.2019.8.pdf
Disk packing
planar graphs
contact representation
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
9:1
9:14
10.4230/OASIcs.SOSA.2019.9
article
Asymmetric Convex Intersection Testing
Barba, Luis
Mulzer, Wolfgang
We consider asymmetric convex intersection testing (ACIT).
Let P subset R^d be a set of n points and H a set of m halfspaces in d dimensions. We denote by ch(P) the polytope obtained by taking the convex hull of P, and by fh(H) the polytope obtained by taking the intersection of the halfspaces in H. Our goal is to decide whether the two polytopes ch(P) and fh(H) are disjoint. Even though ACIT is a natural variant of classic LP-type problems that have been studied at length in the literature, and despite its applications in the analysis of high-dimensional data sets, it appears that the problem has not been studied before.
We discuss how known approaches can be used to attack the ACIT problem, and we provide a very simple strategy that leads to a deterministic algorithm, linear in n and m, whose running time depends reasonably on the dimension d.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.9/OASIcs.SOSA.2019.9.pdf
polytope intersection
LP-type problem
randomized algorithm
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
10:1
10:14
10.4230/OASIcs.SOSA.2019.10
article
Relaxed Voronoi: A Simple Framework for Terminal-Clustering Problems
Filtser, Arnold
Krauthgamer, Robert
Trabelsi, Ohad
We reprove three known algorithmic bounds for terminal-clustering problems, using a single framework that leads to simpler proofs. In this genre of problems, the input is a metric space (X,d) (possibly arising from a graph) and a subset of terminals K subset X, and the goal is to partition the points X such that each part, called a cluster, contains exactly one terminal (possibly with connectivity requirements) so as to minimize some objective. The three bounds we reprove are for Steiner Point Removal on trees [Gupta, SODA 2001], for Metric 0-Extension in bounded doubling dimension [Lee and Naor, unpublished 2003], and for Connected Metric 0-Extension [Englert et al., SICOMP 2014].
A natural approach is to cluster each point with its closest terminal, which would partition X into so-called Voronoi cells, but this approach can fail miserably due to its stringent cluster boundaries. A now-standard fix, which we call the Relaxed-Voronoi framework, is to use enlarged Voronoi cells, but to obtain disjoint clusters, the cells are computed greedily according to some order. This method, first proposed by Calinescu, Karloff and Rabani [SICOMP 2004], was employed successfully to provide state-of-the-art results for terminal-clustering problems on general metrics. However, for restricted families of metrics, e.g., trees and doubling metrics, only more complicated, ad-hoc algorithms are known. Our main contribution is to demonstrate that the Relaxed-Voronoi algorithm is applicable to restricted metrics, and actually leads to relatively simple algorithms and analyses.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.10/OASIcs.SOSA.2019.10.pdf
Clustering
Steiner point removal
Zero extension
Doubling dimension
Relaxed voronoi
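The enlarged-Voronoi-cells idea in the abstract above can be rendered as a short sketch: process the terminals in some order, and let each claim every still-unclaimed point whose distance to it is within a (1 + delta) factor of that point's distance to its nearest terminal. The slack parameter `delta` and the fixed processing order are assumptions of this illustration, not the tuned choices analyzed in the paper.

```python
def relaxed_voronoi(points, terminals, dist, delta):
    """Partition `points` among `terminals` using enlarged Voronoi cells,
    claimed greedily in the given terminal order so clusters stay disjoint."""
    nearest = {x: min(dist(x, t) for t in terminals) for x in points}
    clusters = {}
    unclaimed = set(points)
    for t in terminals:
        claimed = {x for x in unclaimed
                   if dist(x, t) <= (1 + delta) * nearest[x]}
        clusters[t] = sorted(claimed)
        unclaimed -= claimed
    return clusters
```

On the line with terminals 0 and 10 and points 0..10, with delta = 0.5, terminal 0 claims 0..6 and terminal 10 claims the rest; every point is claimed, since each point is within factor 1 of its own nearest terminal.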
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
11:1
11:20
10.4230/OASIcs.SOSA.2019.11
article
Towards a Unified Theory of Sparsification for Matching Problems
Assadi, Sepehr
Bernstein, Aaron
In this paper, we present a construction of a "matching sparsifier", that is, a sparse subgraph of the given graph that preserves large matchings approximately and is robust to modifications of the graph. We use this matching sparsifier to obtain several new algorithmic results for the maximum matching problem:
- An almost (3/2)-approximation one-way communication protocol for the maximum matching problem, significantly simplifying the (3/2)-approximation protocol of Goel, Kapralov, and Khanna (SODA 2012) and extending it from bipartite graphs to general graphs.
- An almost (3/2)-approximation algorithm for the stochastic matching problem, improving upon and significantly simplifying the previous 1.999-approximation algorithm of Assadi, Khanna, and Li (EC 2017).
- An almost (3/2)-approximation algorithm for the fault-tolerant matching problem, which, to our knowledge, is the first non-trivial algorithm for this problem.
Our matching sparsifier is obtained by proving new properties of the edge-degree constrained subgraph (EDCS) of Bernstein and Stein (ICALP 2015; SODA 2016) - designed in the context of maintaining matchings in dynamic graphs - properties that identify the EDCS as an excellent choice for a matching sparsifier. This leads to surprisingly simple and non-technical proofs of the above results in a unified way. Along the way, we also provide a much simpler proof of the fact that an EDCS is guaranteed to contain a large matching, which may be of independent interest.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.11/OASIcs.SOSA.2019.11.pdf
Maximum matching
matching sparsifiers
one-way communication complexity
stochastic matching
fault-tolerant matching
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
12:1
12:7
10.4230/OASIcs.SOSA.2019.12
article
A New Application of Orthogonal Range Searching for Computing Giant Graph Diameters
Ducoffe, Guillaume
A well-known problem for which it is difficult to improve the textbook algorithm is computing the graph diameter. We present two versions of a simple algorithm (one being Monte Carlo and the other deterministic) that for every fixed h and unweighted undirected graph G with n vertices and m edges, either correctly concludes that diam(G) < hn or outputs diam(G), in time O(m+n^{1+o(1)}). The algorithm combines a simple randomized strategy for this problem (Damaschke, IWOCA'16) with a popular framework for computing graph distances that is based on range trees (Cabello and Knauer, Computational Geometry'09). We also prove that under the Strong Exponential Time Hypothesis (SETH), we cannot compute the diameter of a given n-vertex graph in truly subquadratic time, even if the diameter is in Theta(n/log n).
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.12/OASIcs.SOSA.2019.12.pdf
Graph diameter
Orthogonal Range Queries
Hardness in P
FPT in P
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
13:1
13:8
10.4230/OASIcs.SOSA.2019.13
article
Simplified and Space-Optimal Semi-Streaming (2+epsilon)-Approximate Matching
Ghaffari, Mohsen
Wajc, David
In a recent breakthrough, Paz and Schwartzman (SODA'17) presented a single-pass (2+epsilon)-approximation algorithm for the maximum weight matching problem in the semi-streaming model. Their algorithm uses O(n log^2 n) bits of space, for any constant epsilon>0.
We present a simplified and more intuitive primal-dual analysis, for essentially the same algorithm, which also improves the space complexity to the optimal bound of O(n log n) bits - this is optimal as the output matching requires Omega(n log n) bits.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.13/OASIcs.SOSA.2019.13.pdf
Streaming
Semi-Streaming
Space-Optimal
Matching
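The single-pass algorithm analyzed in the record above is short enough to sketch. Below is a hedged rendering of the Paz-Schwartzman local-ratio scheme (vertex potentials plus a stack, unwound greedily at the end); the function name, the eps handling, and the data layout are illustrative assumptions, not the paper's exact presentation.

```python
def stream_weighted_matching(edges, eps=0.0):
    """One pass over (u, v, w) triples.  Keep an edge only if its weight
    beats (1 + eps) times the current potentials of its endpoints; at the
    end, pop the stack and greedily build a matching."""
    phi = {}                      # vertex potentials
    stack = []
    for u, v, w in edges:
        slack = phi.get(u, 0) + phi.get(v, 0)
        if w > (1 + eps) * slack:
            gain = w - slack
            stack.append((u, v))
            phi[u] = phi.get(u, 0) + gain
            phi[v] = phi.get(v, 0) + gain
    matched, matching = set(), []
    while stack:                  # later (heavier, locally) edges first
        u, v = stack.pop()
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the stream ('a','b',10), ('b','c',1), ('c','d',10), the light middle edge is filtered out and both weight-10 edges end up in the matching.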
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
14:1
14:9
10.4230/OASIcs.SOSA.2019.14
article
Simple Greedy 2-Approximation Algorithm for the Maximum Genus of a Graph
Kotrbcík, Michal
Skoviera, Martin
The maximum genus gamma_M(G) of a graph G is the largest genus of an orientable surface into which G has a cellular embedding. Combinatorially, it coincides with the maximum number of disjoint pairs of adjacent edges of G whose removal results in a connected spanning subgraph of G. In this paper we describe a greedy 2-approximation algorithm for maximum genus by proving that removing pairs of adjacent edges from G arbitrarily while retaining connectedness leads to at least gamma_M(G)/2 pairs of edges removed. As a consequence of our approach we also obtain a 2-approximate counterpart of Xuong's combinatorial characterisation of maximum genus.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.14/OASIcs.SOSA.2019.14.pdf
maximum genus
embedding
graph
greedy algorithm
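The greedy procedure described in the abstract above - repeatedly delete any pair of adjacent edges whose removal keeps the spanning subgraph connected - admits a direct sketch. Connectivity is checked by BFS; the helper names are hypothetical, and this quadratic-ish implementation only illustrates the 2-approximation, without the paper's refinements.

```python
from collections import defaultdict, deque

def connected(vertices, edges):
    """BFS check that the undirected edge list spans and connects `vertices`."""
    if not vertices:
        return True
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return len(seen) == len(vertices)

def greedy_adjacent_pairs(vertices, edges):
    """Arbitrarily remove pairs of adjacent edges while the remaining
    spanning subgraph stays connected; by the paper's analysis the number
    of pairs removed is at least gamma_M(G)/2."""
    edges = list(edges)
    pairs, improved = 0, True
    while improved:
        improved = False
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                if set(edges[i]) & set(edges[j]):       # adjacent edges
                    rest = [e for k, e in enumerate(edges) if k not in (i, j)]
                    if connected(vertices, rest):
                        edges, pairs, improved = rest, pairs + 1, True
                        break
            if improved:
                break
    return pairs
```

On K4 (where gamma_M = 1), the greedy removes exactly one adjacent pair: after that, only two edges remain, which cannot span four vertices.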
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
15:1
15:21
10.4230/OASIcs.SOSA.2019.15
article
A Note on Max k-Vertex Cover: Faster FPT-AS, Smaller Approximate Kernel and Improved Approximation
Manurangsi, Pasin
In Maximum k-Vertex Cover (Max k-VC), the input is an edge-weighted graph G and an integer k, and the goal is to find a subset S of k vertices that maximizes the total weight of edges covered by S. Here we say that an edge is covered by S iff at least one of its endpoints lies in S.
We present an FPT approximation scheme (FPT-AS) that runs in (1/epsilon)^{O(k)} poly(n) time for the problem, which improves upon the (k/epsilon)^{O(k)} poly(n)-time FPT-AS of Gupta, Lee and Li [Gupta et al., 2018]. Our algorithm is simple: just use brute force to find the best k-vertex subset among the O(k/epsilon) vertices with maximum weighted degrees.
Our algorithm naturally yields an (efficient) approximate kernelization scheme of O(k/epsilon) vertices; previously, an O(k^5/epsilon^2)-vertex approximate kernel was known only for the unweighted version of Max k-VC [Lokshtanov et al., 2017]. Interestingly, this also has an application outside of parameterized complexity: using our approximate kernelization as a preprocessing step, we can directly apply Raghavendra and Tan's SDP-based algorithm for 2SAT with cardinality constraint [Raghavendra and Tan, 2012] to give a 0.92-approximation algorithm for Max k-VC in polynomial time. This improves upon the best known polynomial time approximation algorithm of Feige and Langberg [Feige and Langberg, 2001], which yields a (0.75 + delta)-approximation for some (small and unspecified) constant delta > 0.
We also consider the minimization version of the problem (called Min k-VC), where the goal is to find a set S of k vertices that minimizes the total weight of edges covered by S. We provide an FPT-AS for Min k-VC with a similar running time of (1/epsilon)^{O(k)} poly(n). Once again, this improves on the (k/epsilon)^{O(k)} poly(n)-time FPT-AS of Gupta et al. On the other hand, we show, assuming a variant of the Small Set Expansion Hypothesis [Raghavendra and Steurer, 2010] and NP !subseteq coNP/poly, that there is no polynomial size approximate kernelization for Min k-VC for any factor less than two.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.15/OASIcs.SOSA.2019.15.pdf
Maximum k-Vertex Cover
Minimum k-Vertex Cover
Approximation Algorithms
Fixed Parameter Algorithms
Approximate Kernelization
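The brute-force algorithm stated in the abstract above is concrete enough to sketch: compute weighted degrees, keep roughly k/epsilon of the highest-degree vertices, and exhaustively try every k-subset of them. The names and the exact candidate-count are illustrative assumptions of this sketch.

```python
from itertools import combinations

def max_k_vertex_cover(edges, k, eps=0.5):
    """Approximate Max k-VC on a weighted edge list (u, v, w): restrict to
    the ~k/eps vertices of largest weighted degree, then return the best
    total weight of edges covered by some k-subset of them."""
    deg = {}
    for u, v, w in edges:
        deg[u] = deg.get(u, 0) + w
        deg[v] = deg.get(v, 0) + w
    candidates = sorted(deg, key=deg.get, reverse=True)[:max(k, round(k / eps))]
    def covered(S):
        return sum(w for u, v, w in edges if u in S or v in S)
    return max(covered(set(S)) for S in combinations(candidates, k))
```

For instance, on edges ('a','b',5), ('a','c',5), ('d','e',7) with k = 2, picking {a, d} covers weight 17, which is optimal here.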
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
16:1
16:16
10.4230/OASIcs.SOSA.2019.16
article
Simple Contention Resolution via Multiplicative Weight Updates
Chang, Yi-Jun
Jin, Wenyu
Pettie, Seth
We consider the classic contention resolution problem, in which devices conspire to share some common resource, for which they each need temporary and exclusive access. To ground the discussion, suppose (identical) devices wake up at various times, and must send a single packet over a shared multiple-access channel. In each time step they may attempt to send their packet; they receive ternary feedback {0,1,2^+} from the channel, 0 indicating silence (no one attempted transmission), 1 indicating success (one device successfully transmitted), and 2^+ indicating noise. We prove that a simple strategy suffices to achieve a channel utilization rate of 1/e-O(epsilon), for any epsilon>0. In each step, device i attempts to send its packet with probability p_i, then applies a rudimentary multiplicative weight-type update to p_i.
p_i <- p_i * e^{epsilon} upon hearing silence (0)
p_i <- p_i upon hearing success (1)
p_i <- p_i * e^{-epsilon/(e-2)} upon hearing noise (2^+)
This scheme works well even if the introduction of devices/packets is adversarial, and even if the adversary can jam time slots (make noise) at will. We prove that if the adversary jams J time slots, then this scheme will achieve channel utilization 1/e-epsilon, excluding O(J) wasted slots. Results similar to these (Bender, Fineman, Gilbert, Young, SODA 2016) were already achieved, but with a lower constant efficiency (less than 0.05) and a more complex algorithm.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.16/OASIcs.SOSA.2019.16.pdf
Contention resolution
multiplicative weight update method
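The multiplicative update quoted in the abstract above is essentially a one-liner; here is a hedged sketch, with the ternary feedback encoded as 0, 1, and the string '2+':

```python
import math

def update_probability(p, feedback, eps):
    """One step of the transmission-probability update from the abstract:
    raise p multiplicatively on silence, keep it on success, and lower it
    by the asymmetric factor e^{-eps/(e-2)} on noise."""
    if feedback == 0:        # silence: the channel is underused, be bolder
        return p * math.exp(eps)
    if feedback == 1:        # success: leave the probability unchanged
        return p
    return p * math.exp(-eps / (math.e - 2))   # noise (2^+): back off
```

A device would call this once per time step on its own p_i; the asymmetry between the up and down factors is what yields the 1/e - O(epsilon) utilization in the analysis.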
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
17:1
17:6
10.4230/OASIcs.SOSA.2019.17
article
A Simple Near-Linear Pseudopolynomial Time Randomized Algorithm for Subset Sum
Jin, Ce
Wu, Hongxun
Given a multiset S of n positive integers and a target integer t, the Subset Sum problem asks to determine whether there exists a subset of S that sums up to t. The current best deterministic algorithm, by Koiliaris and Xu [SODA'17], runs in O~(sqrt{n}t) time, where O~ hides polylogarithmic factors. Bringmann [SODA'17] later gave a randomized O~(n + t) time algorithm using two-stage color-coding. The O~(n+t) running time is believed to be near-optimal.
In this paper, we present a simple and elegant randomized algorithm for Subset Sum in O~(n + t) time. Our new algorithm actually solves its counting version modulo prime p>t, by manipulating generating functions using FFT.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.17/OASIcs.SOSA.2019.17.pdf
subset sum
formal power series
FFT
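The generating-function view behind the algorithm above can be illustrated with a deliberately naive sketch: multiply out prod_i (1 + x^{s_i}) truncated at degree t and read off the coefficient of x^t. Plain coefficient updates make this quadratic time; the paper's contribution is achieving O~(n + t) by manipulating the generating function with FFT and counting modulo a prime p > t.

```python
def count_subset_sums(s, t):
    """Coefficient list of prod_i (1 + x^{s_i}) mod x^{t+1}: entry j is
    the number of subsets of `s` summing to exactly j."""
    coeff = [0] * (t + 1)
    coeff[0] = 1                        # the empty subset
    for a in s:
        for j in range(t, a - 1, -1):   # multiply by (1 + x^a) in place
            coeff[j] += coeff[j - a]
    return coeff

def subset_sum(s, t):
    """Decision version: is some subset of `s` summing to t?"""
    return count_subset_sums(s, t)[t] > 0
```

For example, count_subset_sums([1, 2, 3], 3) ends with coefficient 2 at degree 3, reflecting the two subsets {3} and {1, 2}.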
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
18:1
18:10
10.4230/OASIcs.SOSA.2019.18
article
Submodular Optimization in the MapReduce Model
Liu, Paul
Vondrak, Jan
Submodular optimization has received significant attention in both practice and theory, as a wide array of problems in machine learning, auction theory, and combinatorial optimization have submodular structure. In practice, these problems often involve large amounts of data, and must be solved in a distributed way. One popular framework for running such distributed algorithms is MapReduce. In this paper, we present two simple algorithms for cardinality constrained submodular optimization in the MapReduce model: the first is a (1/2-o(1))-approximation in 2 MapReduce rounds, and the second is a (1-1/e-epsilon)-approximation in (1+o(1))/epsilon MapReduce rounds.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.18/OASIcs.SOSA.2019.18.pdf
mapreduce
submodular
optimization
approximation algorithms
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
19:1
19:19
10.4230/OASIcs.SOSA.2019.19
article
Compressed Sensing with Adversarial Sparse Noise via L1 Regression
Karmalkar, Sushrut
Price, Eric
We present a simple and effective algorithm for the problem of sparse robust linear regression. In this problem, one would like to estimate a sparse vector w^* in R^n from linear measurements corrupted by sparse noise that can arbitrarily change an adversarially chosen eta fraction of measured responses y, as well as introduce bounded norm noise to the responses.
For Gaussian measurements, we show that a simple algorithm based on L1 regression can successfully estimate w^* for any eta < eta_0 ~~ 0.239, and that this threshold is tight for the algorithm. The number of measurements required by the algorithm is O(k log n/k) for k-sparse estimation, which is within constant factors of the number needed without any sparse noise.
Of the three properties we show - the ability to estimate sparse, as well as dense, w^*; the tolerance of a large constant fraction of outliers; and tolerance of adversarial rather than distributional (e.g., Gaussian) dense noise - to the best of our knowledge, no previous polynomial time algorithm was known to achieve more than two.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.19/OASIcs.SOSA.2019.19.pdf
Robust Regression
Compressed Sensing
eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Open Access Series in Informatics
2190-6807
2019-01-08
69
20:1
20:11
10.4230/OASIcs.SOSA.2019.20
article
Approximating Maximin Share Allocations
Garg, Jugal
McGlaughlin, Peter
Taki, Setareh
We study the problem of fair allocation of M indivisible items among N agents using the popular notion of maximin share as our measure of fairness. The maximin share of an agent is the largest value she can guarantee herself if she is allowed to choose a partition of the items into N bundles (one for each agent), on the condition that she receives her least preferred bundle. A maximin share allocation provides each agent a bundle worth at least their maximin share. While it is known that such an allocation need not exist [Procaccia and Wang, 2014; Kurokawa et al., 2016], a series of works [Procaccia and Wang, 2014; Kurokawa et al., 2018; Amanatidis et al., 2017; Barman and Krishna Murthy, 2017] provided 2/3-approximation algorithms in which each agent receives a bundle worth at least 2/3 times their maximin share. Recently, [Ghodsi et al., 2018] improved the approximation guarantee to 3/4. Prior works utilize intricate algorithms, with the exception of [Barman and Krishna Murthy, 2017], which is a simple greedy solution but relies on sophisticated analysis techniques. In this paper, we propose an alternative 2/3 maximin share approximation which offers both a simple algorithm and straightforward analysis. In contrast to other algorithms, our approach allows for a simple and intuitive understanding of why it works.
https://drops.dagstuhl.de/storage/01oasics/oasics-vol069-sosa2019/OASIcs.SOSA.2019.20/OASIcs.SOSA.2019.20.pdf
Fair division
Maximin share
Approximation algorithm