Search Results

Documents authored by Kaplan, Haim


Document
Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments

Authors: Pankaj K. Agarwal, Haim Kaplan, Matthew J. Katz, and Micha Sharir

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In this paper we study a few proximity problems related to a set of pairwise-disjoint segments in ℝ². Let S be a set of n pairwise-disjoint segments in ℝ², and let r > 0 be a parameter. We define the segment proximity graph of S to be G_r(S) := (S, E), where E = {(e₁,e₂) ∣ dist(e₁,e₂) ≤ r} and dist(e₁,e₂) = min_{(p,q) ∈ e₁×e₂} ‖p-q‖ is the Euclidean distance between e₁ and e₂. We define the weight of an edge (e₁,e₂) ∈ E to be dist(e₁,e₂).

We first present a simple grid-based O(n log² n)-time algorithm for computing a BFS tree of G_r(S). We apply it to obtain an O^*(n^{6/5}) + O(n log² n log Δ)-time algorithm for the so-called reverse shortest path problem, in which we want to find the smallest value r^* for which G_{r^*}(S) contains a path of some specified length between two designated start and target segments (where the O^*(⋅) notation hides polylogarithmic factors). Here Δ = max_{e ≠ e' ∈ S} dist(e,e') / min_{e ≠ e' ∈ S} dist(e,e') is the spread of S.

Next, we present a dynamic data structure that can maintain a set S of pairwise-disjoint segments in the plane under insertions/deletions, so that, for a query segment e from an unknown set Q of pairwise-disjoint segments, such that e does not intersect any segment in (the current version of) S, the segment of S closest to e can be computed in O(log⁵ n) amortized time. The amortized update time is also O(log⁵ n). We note that if the segments in S ∪ Q are allowed to intersect, then the known lower bounds on halfplane range searching suggest that a sequence of n updates and queries may take close to Ω(n^{4/3}) time. One thus has to rely strongly on the non-intersecting property of S and Q to perform updates and queries in O(polylog(n)) (amortized) time each.

Using these results on nearest-neighbor (NN) searching for disjoint segments, we show that a DFS tree (or forest) of G_r(S) can be computed in O^*(n) time. We also obtain an O^*(n)-time algorithm for constructing a minimum spanning tree of G_r(S). Finally, we present an O^*(n^{4/3})-time algorithm for computing a single-source shortest-path tree in G_r(S); this is the only result that does not exploit the disjointness of the input segments.
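
To make the definitions concrete, here is a minimal brute-force sketch in Python (all names are hypothetical; a segment is a pair of endpoint tuples). For pairwise-disjoint segments the distance is attained at an endpoint of one of the two segments, so four point-to-segment distances suffice; building the edge set of G_r(S) this way takes O(n²) time, exactly the blowup the paper's near-linear algorithms avoid.

import math
from itertools import combinations

def seg_dist(e1, e2):
    """Distance between disjoint segments e1, e2, each a pair of endpoints."""
    def pt_seg(p, a, b):
        # distance from point p to segment ab
        dx, dy = b[0] - a[0], b[1] - a[1]
        l2 = dx * dx + dy * dy
        t = 0.0 if l2 == 0.0 else max(0.0, min(1.0,
            ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / l2))
        return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))
    (a1, b1), (a2, b2) = e1, e2
    return min(pt_seg(a1, a2, b2), pt_seg(b1, a2, b2),
               pt_seg(a2, a1, b1), pt_seg(b2, a1, b1))

def proximity_graph(S, r):
    """Weighted edge list of G_r(S), by brute force over all pairs."""
    E = []
    for i, j in combinations(range(len(S)), 2):
        d = seg_dist(S[i], S[j])
        if d <= r:
            E.append((i, j, d))
    return E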

Cite as

Pankaj K. Agarwal, Haim Kaplan, Matthew J. Katz, and Micha Sharir. Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 7:1-7:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ESA.2024.7,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Katz, Matthew J. and Sharir, Micha},
  title =	{{Segment Proximity Graphs and Nearest Neighbor Queries Amid Disjoint Segments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{7:1--7:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.7},
  URN =		{urn:nbn:de:0030-drops-210782},
  doi =		{10.4230/LIPIcs.ESA.2024.7},
  annote =	{Keywords: segment proximity graphs, nearest neighbor searching, dynamic data structures, BFS, DFS, unit-disk graphs}
}
Document
Track A: Algorithms, Complexity and Games
Caching Connections in Matchings

Authors: Yaniv Sadeh and Haim Kaplan

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Motivated by the desire to utilize a limited number of configurable optical switches, made available by recent advances in Software Defined Networks (SDNs), we define an online problem which we call the Caching in Matchings problem. This problem has a natural combinatorial structure and therefore may find additional applications in theory and practice. In the Caching in Matchings problem our cache consists of k matchings of connections between servers that form a bipartite graph. To cache a connection we insert it into one of the k matchings, possibly evicting at most two other connections from this matching. This problem resembles Connection Caching [Cohen et al., 2000], where we also cache connections, but the only restriction there is that they form a graph of bounded degree k.

Our results show a somewhat surprising qualitative separation between the problems: the competitive ratio of any online algorithm for caching in matchings must depend on the size of the graph. Specifically, we give a deterministic O(nk)-competitive algorithm and a randomized O(n log k)-competitive algorithm for caching in matchings, where n is the number of servers and k is the number of matchings. We also show that the competitive ratio of any deterministic algorithm is Ω(max(n/k, k)) and of any randomized algorithm is Ω(log(n/(k² log k)) ⋅ log k). In particular, the lower bound for randomized algorithms is Ω(log n) regardless of k, and can be as high as Ω(log² n) if k = n^{1/3}, for example. We also show that if we allow the algorithm to use at least 2k-1 matchings, compared to the k used by the optimum, then we match the competitive ratios of connection caching, which are independent of n. Interestingly, we also show that even a single extra matching allows the algorithm to achieve substantially better bounds.
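
A minimal sketch of the cache model itself, not of the paper's competitive algorithms (names are hypothetical): each of the k matchings is stored as a partner dictionary, and caching a connection (u, v) into matching i evicts the at most two connections of that matching incident to u or v.

class MatchingCache:
    """Cache consisting of k matchings of connections between servers."""

    def __init__(self, k):
        self.matchings = [dict() for _ in range(k)]  # matching i: server -> partner

    def cache(self, u, v, i):
        """Insert connection (u, v) into matching i; return the (at most two)
        evicted connections of matching i that touch u or v."""
        m = self.matchings[i]
        evicted = []
        for x in (u, v):
            if x in m:
                y = m.pop(x)
                m.pop(y, None)
                evicted.append((x, y))
        m[u] = v
        m[v] = u
        return evicted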

Cite as

Yaniv Sadeh and Haim Kaplan. Caching Connections in Matchings. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 120:1-120:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{sadeh_et_al:LIPIcs.ICALP.2024.120,
  author =	{Sadeh, Yaniv and Kaplan, Haim},
  title =	{{Caching Connections in Matchings}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{120:1--120:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.120},
  URN =		{urn:nbn:de:0030-drops-202639},
  doi =		{10.4230/LIPIcs.ICALP.2024.120},
  annote =	{Keywords: Caching, Matchings, Caching in Matchings, Edge Coloring, Online Algorithms}
}
Document
Optimal Energetic Paths for Electric Cars

Authors: Dani Dorfman, Haim Kaplan, Robert E. Tarjan, and Uri Zwick

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
A weighted directed graph G = (V, A, c), where A ⊆ V×V and c : A → ℝ, naturally describes a road network in which an electric car, or electric vehicle (EV), can roam. An arc uv ∈ A models a road segment connecting the two vertices (junctions) u and v. The cost c(uv) of the arc uv is the amount of energy the car needs to travel from u to v. This amount can be positive, zero or negative. We consider both the more realistic scenario in which there are no negative cycles in the graph, as well as the more challenging, and also well-motivated, scenario in which negative cycles may be present. The electric car has a battery that can store up to B units of energy. The car can traverse an arc uv ∈ A only if it is at u and the charge b in its battery satisfies b ≥ c(uv). If the car traverses the arc uv then it reaches v with a charge of min{b-c(uv), B} in its battery. Arcs with a positive cost deplete the battery while arcs with negative costs may charge the battery, but not above its capacity of B. If the car is at a vertex u and cannot traverse any outgoing arcs of u, then it is stuck and cannot continue traveling.

We consider the following natural problem: given two vertices s, t ∈ V, can the car travel from s to t, starting at s with an initial charge b, where 0 ≤ b ≤ B? If so, what is the maximum charge with which the car can reach t? Equivalently, what is the smallest depletion δ_{B,b}(s,t) such that the car can reach t with a charge of b-δ_{B,b}(s,t) in its battery, and which path should the car follow to achieve this? We also refer to δ_{B,b}(s,t) as the energetic cost of traveling from s to t. We let δ_{B,b}(s,t) = ∞ if the car cannot travel from s to t starting with an initial charge of b.

The problem of computing energetic costs is a strict generalization of the standard shortest paths problem. When there are no negative cycles, the single-source version of the problem can be solved using simple adaptations of the classical Bellman-Ford and Dijkstra algorithms. More involved algorithms are required when the graph may contain negative cycles.
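
The Bellman-Ford adaptation mentioned in the last paragraph is easy to sketch when there are no negative cycles: a cycle then never increases the charge, so an optimal route is a simple path and n-1 relaxation rounds suffice. A minimal version (hypothetical names):

def max_final_charge(n, arcs, s, t, b, B):
    """Maximum charge with which the car can reach t when starting at s with
    charge b (battery capacity B); assumes no negative cycles.
    arcs: list of (u, v, cost) over vertices 0..n-1. Returns -inf if t is
    unreachable; otherwise the energetic cost is delta = b - result."""
    charge = [float("-inf")] * n
    charge[s] = b
    for _ in range(n - 1):                    # an optimal route is a simple path
        updated = False
        for u, v, c in arcs:
            if charge[u] >= c:                # uv is traversable only if b >= c(uv)
                cand = min(charge[u] - c, B)  # recharging is capped at capacity B
                if cand > charge[v]:
                    charge[v] = cand
                    updated = True
        if not updated:
            break
    return charge[t]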

Cite as

Dani Dorfman, Haim Kaplan, Robert E. Tarjan, and Uri Zwick. Optimal Energetic Paths for Electric Cars. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 42:1-42:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dorfman_et_al:LIPIcs.ESA.2023.42,
  author =	{Dorfman, Dani and Kaplan, Haim and Tarjan, Robert E. and Zwick, Uri},
  title =	{{Optimal Energetic Paths for Electric Cars}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{42:1--42:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.42},
  URN =		{urn:nbn:de:0030-drops-186955},
  doi =		{10.4230/LIPIcs.ESA.2023.42},
  annote =	{Keywords: Electric cars, Optimal Paths, Battery depletion}
}
Document
The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs

Authors: Haim Kaplan, Matthew J. Katz, Rachel Saban, and Micha Sharir

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study the reverse shortest path problem on disk graphs in the plane. In this problem we consider the proximity graph of a set of n disks of arbitrary radii in the plane: two disks are connected if the distance between them is at most some threshold parameter r. Intersection graphs are the special case r = 0. We give an algorithm that, given a target length k, computes the smallest value of r for which there is a path of length at most k between some given pair of disks in the proximity graph. Our algorithm runs in O^*(n^{5/4}) randomized expected time, which improves to O^*(n^{6/5}) for unit disk graphs, where all the disks have the same radius. Our technique is robust and can be applied to many variants of the problem. One significant variant is the case of weighted proximity graphs, where edges are assigned real weights equal to the distance between the disks or between their centers, and k is replaced by a target weight w. In other variants, we want to optimize a parameter different from r, such as a scale factor of the radii of the disks. The main technique for the decision version of the problem (determining whether the graph with a given r has the desired property) is based on efficient implementations of BFS (for the unweighted case) and of Dijkstra's algorithm (for the weighted case), using efficient data structures for maintaining the bichromatic closest pair for certain bicliques and several distance functions. The optimization problem is then solved by combining the resulting decision procedure with enhanced variants of the interval shrinking and bifurcation technique of [R. Ben Avraham et al., 2015].
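
For intuition, a brute-force take on the reverse shortest path problem (hypothetical names): the optimal r is one of the O(n²) pairwise disk distances, since the proximity graph changes only at those values, so it suffices to test the candidates in increasing order with a BFS. This runs in roughly O(n⁴) time, against the O^*(n^{5/4}) above; the paper's interval shrinking and bifurcation machinery exists precisely to avoid this kind of exhaustive search.

import math
from collections import deque

def disk_dist(d1, d2):
    """Gap between two disks given as (x, y, r); 0 if they intersect."""
    (x1, y1, r1), (x2, y2, r2) = d1, d2
    return max(0.0, math.hypot(x1 - x2, y1 - y2) - r1 - r2)

def reverse_shortest_path(disks, s, t, k):
    """Smallest r admitting a path of at most k edges between disks s and t."""
    n = len(disks)
    candidates = sorted({disk_dist(disks[i], disks[j])
                         for i in range(n) for j in range(i + 1, n)})

    def hops(r):  # BFS in the proximity graph with threshold r
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in dist and disk_dist(disks[u], disks[v]) <= r:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist.get(t, math.inf)

    return next((r for r in candidates if hops(r) <= k), math.inf)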

Cite as

Haim Kaplan, Matthew J. Katz, Rachel Saban, and Micha Sharir. The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 67:1-67:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.ESA.2023.67,
  author =	{Kaplan, Haim and Katz, Matthew J. and Saban, Rachel and Sharir, Micha},
  title =	{{The Unweighted and Weighted Reverse Shortest Path Problem for Disk Graphs}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{67:1--67:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.67},
  URN =		{urn:nbn:de:0030-drops-187208},
  doi =		{10.4230/LIPIcs.ESA.2023.67},
  annote =	{Keywords: Computational geometry, geometric optimization, disk graphs, BFS, Dijkstra’s algorithm, reverse shortest path}
}
Document
Track A: Algorithms, Complexity and Games
Expander Decomposition with Fewer Inter-Cluster Edges Using a Spectral Cut Player

Authors: Daniel Agassy, Dani Dorfman, and Haim Kaplan

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
A (ϕ,ε)-expander decomposition of a graph G (with n vertices and m edges) is a partition of V into clusters V₁,…,V_k with conductance Φ(G[V_i]) ≥ ϕ, such that there are at most εm inter-cluster edges. Such a decomposition plays a crucial role in many graph algorithms. We give a randomized Õ(m/ϕ) time algorithm for computing a (ϕ, ϕ log² n)-expander decomposition. This improves upon the (ϕ, ϕ log³ n)-expander decomposition also obtained in Õ(m/ϕ) time by [Saranurak and Wang, SODA 2019] (SW) and brings the number of inter-cluster edges within a logarithmic factor of optimal. One crucial component of SW's algorithm is a non-stop version of the cut-matching game of [Khandekar, Rao, Vazirani, JACM 2009] (KRV): the cut player does not stop when it gets an unbalanced sparse cut from the matching player, but continues to play on a trimmed part of the large side. The crux of our improvement is the design of a non-stop version of the cleverer cut player of [Orecchia, Schulman, Vazirani, Vishnoi, STOC 2008] (OSVV). The cut player of OSVV uses a more sophisticated random walk, a subtle potential function, and spectral arguments. Designing and analysing a non-stop version of this game was an explicit open question asked by SW.
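
To pin down the objects involved, here is an exponential-time checker for the definition (hypothetical names; adj maps each vertex to its set of neighbors). It verifies that every cluster induces a subgraph of conductance at least ϕ and that at most εm edges cross between clusters; it is usable only on tiny examples, since it enumerates all cuts.

from itertools import combinations

def conductance(adj, cluster):
    """Conductance of G[cluster], by exhaustive search over all cuts."""
    vs = set(cluster)
    deg = {v: len(adj[v] & vs) for v in vs}   # degrees inside G[cluster]
    vol = sum(deg.values())
    best = float("inf")
    nodes = sorted(vs)
    for size in range(1, len(nodes)):
        for side in combinations(nodes, size):
            side = set(side)
            cut = sum(1 for v in side for u in adj[v] & vs if u not in side)
            vol_side = sum(deg[v] for v in side)
            small = min(vol_side, vol - vol_side)
            if small > 0:
                best = min(best, cut / small)
    return best if best < float("inf") else 1.0  # convention for singleton clusters

def check_decomposition(adj, clusters, phi, eps):
    """Check the (phi, eps) guarantee for a candidate partition into clusters."""
    m = sum(len(adj[v]) for v in adj) // 2
    where = {v: i for i, c in enumerate(clusters) for v in c}
    inter = sum(1 for v in adj for u in adj[v] if where[v] != where[u]) // 2
    return inter <= eps * m and all(conductance(adj, c) >= phi for c in clusters)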

Cite as

Daniel Agassy, Dani Dorfman, and Haim Kaplan. Expander Decomposition with Fewer Inter-Cluster Edges Using a Spectral Cut Player. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 9:1-9:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{agassy_et_al:LIPIcs.ICALP.2023.9,
  author =	{Agassy, Daniel and Dorfman, Dani and Kaplan, Haim},
  title =	{{Expander Decomposition with Fewer Inter-Cluster Edges Using a Spectral Cut Player}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{9:1--9:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.9},
  URN =		{urn:nbn:de:0030-drops-180619},
  doi =		{10.4230/LIPIcs.ICALP.2023.9},
  annote =	{Keywords: Expander Decomposition, Cut-Matching Game}
}
Document
Track A: Algorithms, Complexity and Games
Fast Approximation of Search Trees on Trees with Centroid Trees

Authors: Benjamin Aram Berendsohn, Ishay Golinsky, Haim Kaplan, and László Kozma

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
Search trees on trees (STTs) generalize the fundamental binary search tree (BST) data structure: in STTs the underlying search space is an arbitrary tree, whereas in BSTs it is a path. An optimal BST of size n can be computed for a given distribution of queries in 𝒪(n²) time [Knuth, Acta Inf. 1971], and centroid BSTs provide a nearly-optimal alternative, computable in 𝒪(n) time [Mehlhorn, SICOMP 1977]. By contrast, optimal STTs are not known to be computable in polynomial time, and the fastest constant-approximation algorithm runs in 𝒪(n³) time [Berendsohn, Kozma, SODA 2022]. Centroid trees can be defined for STTs analogously to BSTs, and they have been used in a wide range of algorithmic applications. In the unweighted case (i.e., for a uniform distribution of queries), the centroid tree can be computed in 𝒪(n) time [Brodal, Fagerberg, Pedersen, Östlin, ICALP 2001; Della Giustina, Prezza, Venturini, SPIRE 2019]. These algorithms, however, do not readily extend to the weighted case. Moreover, no approximation guarantees were previously known for centroid trees in either the unweighted or the weighted case.

In this paper we revisit centroid trees in a general, weighted setting, and we settle both the algorithmic complexity of constructing them and the quality of their approximation. For constructing a weighted centroid tree, we give an output-sensitive 𝒪(n log h) ⊆ 𝒪(n log n) time algorithm, where h is the height of the resulting centroid tree. If the weights are of polynomial complexity, the running time is 𝒪(n log log n). We show these bounds to be optimal, in a general decision tree model of computation. For approximation, we prove that the cost of a centroid tree is at most twice the optimum, and this guarantee is best possible, both in the weighted and unweighted cases. We also give tight, fine-grained bounds on the approximation ratio for bounded-degree trees and on the approximation ratio of more general α-centroid trees.
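
For the unweighted case, a naive sketch of the centroid-tree construction discussed above (hypothetical names; adj maps each vertex of the tree to its set of neighbors): find a centroid, i.e., a vertex whose removal leaves components of size at most n/2, make it the root, and recurse on the components. Recomputing subtree sizes at every level gives roughly O(n log n) time, versus the linear-time algorithms cited in the abstract.

def centroid_tree(adj):
    """Map each vertex to its parent in the centroid tree (None for the root)."""
    removed, ct_parent = set(), {}

    def sizes_and_parents(root):
        par, order = {root: None}, [root]
        for v in order:                        # BFS; order grows while scanned
            for u in adj[v]:
                if u not in par and u not in removed:
                    par[u] = v
                    order.append(u)
        size = {}
        for v in reversed(order):
            size[v] = 1 + sum(size[u] for u in adj[v] if par.get(u) == v)
        return size, par

    def centroid(root):
        size, par = sizes_and_parents(root)
        n, v = size[root], root
        while True:                            # walk toward the heavy child
            heavy = next((u for u in adj[v]
                          if par.get(u) == v and size[u] > n // 2), None)
            if heavy is None:
                return v
            v = heavy

    def build(root, parent):
        c = centroid(root)
        ct_parent[c] = parent
        removed.add(c)
        for u in adj[c]:
            if u not in removed:
                build(u, c)

    build(next(iter(adj)), None)
    return ct_parent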

Cite as

Benjamin Aram Berendsohn, Ishay Golinsky, Haim Kaplan, and László Kozma. Fast Approximation of Search Trees on Trees with Centroid Trees. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 19:1-19:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{berendsohn_et_al:LIPIcs.ICALP.2023.19,
  author =	{Berendsohn, Benjamin Aram and Golinsky, Ishay and Kaplan, Haim and Kozma, L\'{a}szl\'{o}},
  title =	{{Fast Approximation of Search Trees on Trees with Centroid Trees}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{19:1--19:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.19},
  URN =		{urn:nbn:de:0030-drops-180711},
  doi =		{10.4230/LIPIcs.ICALP.2023.19},
  annote =	{Keywords: centroid tree, search trees on trees, approximation}
}
Document
Dynamic Binary Search Trees: Improved Lower Bounds for the Greedy-Future Algorithm

Authors: Yaniv Sadeh and Haim Kaplan

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
Binary search trees (BSTs) are one of the most basic and widely used data structures. The best static tree for serving a sequence of queries (searches) can be computed by dynamic programming. In contrast, when the BSTs are allowed to be dynamic (i.e., to change by rotations between searches), we still do not know how to compute the optimal algorithm (OPT) for a given sequence. One of the candidate algorithms whose serving cost is suspected to be optimal up to a (multiplicative) constant factor is known as Greedy Future (GF). In an equivalent geometric way of representing queries on BSTs, GF is in fact equivalent to another algorithm called Geometric Greedy (GG). Most of the results on GF are obtained using the geometric model and the study of GG. Despite this intensive and fruitful recent research, the best lower bound we have on the competitive ratio of GF is 4/3. Furthermore, it has been conjectured that the additive gap between the cost of GF and OPT is only linear in the number of queries. In this paper we prove a lower bound of 2 on the competitive ratio of GF, and we prove that the additive gap between the cost of GF and OPT can be Ω(m ⋅ log log n), where n is the number of items in the tree and m is the number of queries.

Cite as

Yaniv Sadeh and Haim Kaplan. Dynamic Binary Search Trees: Improved Lower Bounds for the Greedy-Future Algorithm. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 53:1-53:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{sadeh_et_al:LIPIcs.STACS.2023.53,
  author =	{Sadeh, Yaniv and Kaplan, Haim},
  title =	{{Dynamic Binary Search Trees: Improved Lower Bounds for the Greedy-Future Algorithm}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{53:1--53:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.53},
  URN =		{urn:nbn:de:0030-drops-177055},
  doi =		{10.4230/LIPIcs.STACS.2023.53},
  annote =	{Keywords: Binary Search Trees, Greedy Future, Geometric Greedy, Lower Bounds, Dynamic Optimality Conjecture}
}
Document
Dynamic Connectivity in Disk Graphs

Authors: Haim Kaplan, Alexander Kauer, Katharina Klost, Kristin Knorr, Wolfgang Mulzer, Liam Roditty, and Paul Seiferth

Published in: LIPIcs, Volume 224, 38th International Symposium on Computational Geometry (SoCG 2022)


Abstract
Let S ⊆ ℝ² be a set of n planar sites, such that each s ∈ S has an associated radius r_s > 0. Let 𝒟(S) be the disk intersection graph for S. It has vertex set S and an edge between two distinct sites s, t ∈ S if and only if the disks with centers s, t and radii r_s, r_t intersect. Our goal is to design data structures that maintain the connectivity structure of 𝒟(S) as sites are inserted and/or deleted.

First, we consider unit disk graphs, i.e., r_s = 1 for all s ∈ S. We describe a data structure that has O(log² n) amortized update and O(log n/log log n) amortized query time.

Second, we look at disk graphs with bounded radius ratio Ψ, i.e., for all s ∈ S, we have 1 ≤ r_s ≤ Ψ, for a Ψ ≥ 1 known in advance. In the fully dynamic case, we achieve amortized update time O(Ψ λ₆(log n) log⁷ n) and query time O(log n/log log n), where λ_s(n) is the maximum length of a Davenport-Schinzel sequence of order s on n symbols. In the incremental case, where only insertions are allowed, we get logarithmic dependency on Ψ, with O(α(n)) query time and O(log Ψ λ₆(log n) log⁷ n) update time. For the decremental setting, where only deletions are allowed, we first develop an efficient disk revealing structure: given two sets R and B of disks, we can delete disks from R, and upon each deletion, we receive a list of all disks in B that no longer intersect the union of R. Using this, we get decremental data structures with amortized query time O(log n/log log n) that support m deletions in O((n log⁵ n + m log⁷ n) λ₆(log n) + n log Ψ log⁴ n) overall time for bounded radius ratio Ψ and O((n log⁶ n + m log⁸ n) λ₆(log n)) for arbitrary radii.
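
As a point of reference for the dynamic structures above, a static baseline for the unit disk case (hypothetical names): with all radii equal to 1, two sites are adjacent if and only if their distance is at most 2, so bucketing the sites into a grid of side 2 and running union-find over neighboring cells yields the connected components, with distance checks restricted to pairs in nearby cells.

import math
from collections import defaultdict

def unit_disk_components(pts):
    """Label the connected components of the unit disk graph (edge iff |st| <= 2)."""
    parent = list(range(len(pts)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    grid = defaultdict(list)                   # cell of side 2 -> site indices
    for i, (x, y) in enumerate(pts):
        grid[(math.floor(x / 2), math.floor(y / 2))].append(i)
    for (cx, cy), cell in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    for i in cell:
                        if i < j and math.dist(pts[i], pts[j]) <= 2:
                            parent[find(i)] = find(j)  # union
    return [find(i) for i in range(len(pts))]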

Cite as

Haim Kaplan, Alexander Kauer, Katharina Klost, Kristin Knorr, Wolfgang Mulzer, Liam Roditty, and Paul Seiferth. Dynamic Connectivity in Disk Graphs. In 38th International Symposium on Computational Geometry (SoCG 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 224, pp. 49:1-49:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.SoCG.2022.49,
  author =	{Kaplan, Haim and Kauer, Alexander and Klost, Katharina and Knorr, Kristin and Mulzer, Wolfgang and Roditty, Liam and Seiferth, Paul},
  title =	{{Dynamic Connectivity in Disk Graphs}},
  booktitle =	{38th International Symposium on Computational Geometry (SoCG 2022)},
  pages =	{49:1--49:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-227-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{224},
  editor =	{Goaoc, Xavier and Kerber, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2022.49},
  URN =		{urn:nbn:de:0030-drops-160572},
  doi =		{10.4230/LIPIcs.SoCG.2022.49},
  annote =	{Keywords: Disk Graphs, Connectivity, Lower Envelopes}
}
Document
Locality Sensitive Hashing for Efficient Similar Polygon Retrieval

Authors: Haim Kaplan and Jay Tenenbaum

Published in: LIPIcs, Volume 187, 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)


Abstract
Locality Sensitive Hashing (LSH) is an effective method of indexing a set of items to support efficient nearest neighbors queries in high-dimensional spaces. The basic idea of LSH is that similar items should produce hash collisions with higher probability than dissimilar items. We study LSH for (not necessarily convex) polygons, and use it to give efficient data structures for similar shape retrieval. Arkin et al. [Arkin et al., 1991] represent polygons by their "turning function": a function that tracks the angle between the polygon's tangent and the x-axis while traversing the perimeter of the polygon. They define the distance between polygons to be variations of the L_p (for p = 1,2) distance between their turning functions. This metric is invariant under translation, rotation and scaling (and the selection of the initial point on the perimeter) and therefore models well the intuitive notion of shape resemblance.

We develop and analyze LSH near neighbor data structures for several variations of the L_p distance for functions (for p = 1,2). By applying our schemes to the turning functions of a collection of polygons we obtain efficient near neighbor LSH-based structures for polygons. To tune our structures to turning functions of polygons, we prove some new properties of these turning functions that may be of independent interest. As part of our analysis, we address the following problem, which is of independent interest: find the vertical translation of a function f that is closest in L₁ distance to a function g. We prove tight bounds on the approximation guarantee obtained by the translation that equals the difference between the averages of g and f.
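
A sketch of the turning-function representation described above (hypothetical names): the function maps the fraction of the perimeter traversed to the cumulative tangent angle, and for a polygon it is a step function, here returned as its list of breakpoints. The L_p distances studied in the paper are then distances between such step functions.

import math

def turning_function(poly):
    """Breakpoints (s, theta) of the turning function of a simple polygon,
    given as a counterclockwise list of (x, y) vertices: theta is the
    cumulative tangent angle after a fraction s of the perimeter."""
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    perim = sum(lengths)
    theta = math.atan2(edges[0][1], edges[0][0])
    pts, s = [(0.0, theta)], 0.0
    for i in range(1, n):
        s += lengths[i - 1] / perim
        (ux, uy), (vx, vy) = edges[i - 1], edges[i]
        theta += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)  # signed turn
        pts.append((s, theta))
    return pts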

Cite as

Haim Kaplan and Jay Tenenbaum. Locality Sensitive Hashing for Efficient Similar Polygon Retrieval. In 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 187, pp. 46:1-46:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.STACS.2021.46,
  author =	{Kaplan, Haim and Tenenbaum, Jay},
  title =	{{Locality Sensitive Hashing for Efficient Similar Polygon Retrieval}},
  booktitle =	{38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021)},
  pages =	{46:1--46:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-180-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{187},
  editor =	{Bl\"{a}ser, Markus and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2021.46},
  URN =		{urn:nbn:de:0030-drops-136910},
  doi =		{10.4230/LIPIcs.STACS.2021.46},
  annote =	{Keywords: Locality sensitive hashing, polygons, turning function, L\_p distance, nearest neighbors, similarity search}
}
Document
Locality Sensitive Hashing for Set-Queries, Motivated by Group Recommendations

Authors: Haim Kaplan and Jay Tenenbaum

Published in: LIPIcs, Volume 162, 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)


Abstract
Locality Sensitive Hashing (LSH) is an effective method to index a set of points such that we can efficiently find the nearest neighbors of a query point. We extend this method to our novel Set-query LSH (SLSH), such that it can find the nearest neighbors of a set of points, given as a query. Let s(x,y) be the similarity between two points x and y. We define a similarity between a set Q and a point x by aggregating the similarities s(p,x) over all p ∈ Q. For example, we can take s(p,x) to be the angular similarity between p and x (i.e., 1-(∠(x,p)/π)), and aggregate by arithmetic or geometric averaging, or by taking the lowest similarity. We develop locality sensitive hash families and data structures for a large set of such arithmetic and geometric averaging similarities, and analyze their collision probabilities. We also establish an analogous framework and hash families for distance functions. Specifically, we give a structure for the Euclidean distance aggregated by either averaging or taking the maximum. We leverage SLSH to solve a geometric extension of the approximate near neighbors problem, in which we consider a metric for which the unit ball is an ellipsoid whose orientation is specified with the query. An important application that motivates our work is group recommendation systems. Such a system embeds movies and users in the same feature space, and the task of recommending a movie for a group to watch together translates to a set-query Q using an appropriate similarity.
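
A minimal sketch of the similarities being indexed, not of the hash families themselves (hypothetical names; vectors are assumed nonzero): the angular similarity 1 - ∠(x,p)/π, aggregated over the query set by arithmetic averaging, geometric averaging, or taking the lowest similarity.

import math

def angular_similarity(p, x):
    """1 - angle(p, x)/pi for nonzero vectors p and x."""
    cos = (sum(a * b for a, b in zip(p, x))
           / (math.hypot(*p) * math.hypot(*x)))
    return 1.0 - math.acos(max(-1.0, min(1.0, cos))) / math.pi

def set_similarity(Q, x, aggregate="arithmetic"):
    """Similarity of query set Q to point x under one of three aggregations."""
    sims = [angular_similarity(p, x) for p in Q]
    if aggregate == "arithmetic":
        return sum(sims) / len(sims)
    if aggregate == "geometric":
        return math.prod(sims) ** (1.0 / len(sims))
    return min(sims)  # the "lowest similarity" aggregation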

Cite as

Haim Kaplan and Jay Tenenbaum. Locality Sensitive Hashing for Set-Queries, Motivated by Group Recommendations. In 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 162, pp. 28:1-28:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.SWAT.2020.28,
  author =	{Kaplan, Haim and Tenenbaum, Jay},
  title =	{{Locality Sensitive Hashing for Set-Queries, Motivated by Group Recommendations}},
  booktitle =	{17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020)},
  pages =	{28:1--28:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-150-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{162},
  editor =	{Albers, Susanne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2020.28},
  URN =		{urn:nbn:de:0030-drops-122756},
  doi =		{10.4230/LIPIcs.SWAT.2020.28},
  annote =	{Keywords: Locality sensitive hashing, nearest neighbors, similarity search, group recommendations, distance functions, similarity functions, ellipsoid}
}
Document
How to Find a Point in the Convex Hull Privately

Authors: Haim Kaplan, Micha Sharir, and Uri Stemmer

Published in: LIPIcs, Volume 164, 36th International Symposium on Computational Geometry (SoCG 2020)


Abstract
We study the question of how to compute a point in the convex hull of an input set S of n points in ℝ^d in a differentially private manner. This question, which is trivial without privacy requirements, turns out to be quite deep when imposing differential privacy. In particular, it is known that the input points must reside on a fixed finite subset G ⊆ ℝ^d, and furthermore, the size of S must grow with the size of G. Previous works [Amos Beimel et al., 2010; Amos Beimel et al., 2019; Amos Beimel et al., 2013; Mark Bun et al., 2018; Mark Bun et al., 2015; Haim Kaplan et al., 2019] focused on understanding how n needs to grow with |G|, and showed that n = O(d^{2.5} ⋅ 8^{log^*|G|}) suffices (so n does not have to grow significantly with |G|). However, the available constructions exhibit running time at least |G|^{d²}, where typically |G| = X^d for some (large) discretization parameter X, so the running time is in fact Ω(X^{d³}).

In this paper we give a differentially private algorithm that runs in O(n^d) time, assuming that n = Ω(d⁴ log X). To get this result we study and exploit some structural properties of the Tukey levels (the regions D_{≥ k} consisting of points whose Tukey depth is at least k, for k = 0,1,…). In particular, we derive lower bounds on their volumes for point sets S in general position, and develop a rather subtle mechanism for handling point sets S in degenerate position (where the deep Tukey regions have zero volume). A naive approach to the construction of the Tukey regions requires n^{O(d²)} time. To reduce the cost to O(n^d), we use an approximation scheme for estimating the volumes of the Tukey regions (within their affine spans in case of degeneracy), and for sampling a point from such a region, a scheme that is based on the volume estimation framework of Lovász and Vempala [László Lovász and Santosh S. Vempala, 2006] and of Cousins and Vempala [Ben Cousins and Santosh S. Vempala, 2018]. Making this framework differentially private raises a set of technical challenges that we address.

Cite as

Haim Kaplan, Micha Sharir, and Uri Stemmer. How to Find a Point in the Convex Hull Privately. In 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 164, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.SoCG.2020.52,
  author =	{Kaplan, Haim and Sharir, Micha and Stemmer, Uri},
  title =	{{How to Find a Point in the Convex Hull Privately}},
  booktitle =	{36th International Symposium on Computational Geometry (SoCG 2020)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-143-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{164},
  editor =	{Cabello, Sergio and Chen, Danny Z.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2020.52},
  URN =		{urn:nbn:de:0030-drops-122107},
  doi =		{10.4230/LIPIcs.SoCG.2020.52},
  annote =	{Keywords: Differential privacy, Tukey depth, Convex hull}
}
Document
Sample Complexity Bounds for Influence Maximization

Authors: Gal Sadeh, Edith Cohen, and Haim Kaplan

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Influence maximization (IM) is the problem of finding, for a given s ≥ 1, a set S of |S| = s nodes in a network with maximum influence. With stochastic diffusion models, the influence of a set S of seed nodes is defined as the expectation of its reachability over simulations, where each simulation specifies a deterministic reachability function. Two well-studied special cases are the Independent Cascade (IC) and the Linear Threshold (LT) models of Kempe, Kleinberg, and Tardos [Kempe et al., 2003]. The influence function in stochastic diffusion is unbiasedly estimated by averaging reachability values over i.i.d. simulations. We study the IM sample complexity: the number of simulations needed to determine a (1-ε)-approximate maximizer with confidence 1-δ. Our main result is a surprising upper bound of O(s τ ε^{-2} ln(n/δ)) for a broad class of models that includes the IC and LT models and their mixtures, where n is the number of nodes and τ is the number of diffusion steps. Generally τ ≪ n, so this significantly improves over the generic upper bound of O(s n ε^{-2} ln(n/δ)). Our sample complexity bounds are derived from novel upper bounds on the variance of the reachability that allow for small relative error for influential sets and additive error when influence is small. Moreover, we provide a data-adaptive method that can detect models on which fewer simulations suffice, and use fewer simulations there. Finally, we provide an efficient greedy design that computes a (1-1/e-ε)-approximate maximizer from simulations and applies to any submodular stochastic diffusion model that satisfies the variance bounds.
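
A minimal sketch of the estimator whose sample complexity is being studied (hypothetical names): in the Independent Cascade model, each edge (u, v) is live in a simulation with its own probability, and the influence of a seed set is estimated by averaging its reachability over i.i.d. simulations.

import random
from collections import deque

def ic_reachability(adj_p, seeds, rng):
    """One IC simulation: adj_p maps u to a list of (v, p); each edge is live
    with probability p, and we return the number of nodes reached from seeds."""
    live = {u: [v for v, p in adj_p[u] if rng.random() < p] for u in adj_p}
    seen, q = set(seeds), deque(seeds)
    while q:
        u = q.popleft()
        for v in live.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return len(seen)

def estimate_influence(adj_p, seeds, sims=1000, seed=0):
    """Unbiased estimate: average reachability over i.i.d. simulations."""
    rng = random.Random(seed)
    return sum(ic_reachability(adj_p, seeds, rng) for _ in range(sims)) / sims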

Cite as

Gal Sadeh, Edith Cohen, and Haim Kaplan. Sample Complexity Bounds for Influence Maximization. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 29:1-29:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{sadeh_et_al:LIPIcs.ITCS.2020.29,
  author =	{Sadeh, Gal and Cohen, Edith and Kaplan, Haim},
  title =	{{Sample Complexity Bounds for Influence Maximization}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{29:1--29:36},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.29},
  URN =		{urn:nbn:de:0030-drops-117140},
  doi =		{10.4230/LIPIcs.ITCS.2020.29},
  annote =	{Keywords: Sample complexity, Influence maximization, Submodular maximization}
}
Document
Triangles and Girth in Disk Graphs and Transmission Graphs

Authors: Haim Kaplan, Katharina Klost, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
Let S ⊆ ℝ² be a set of n sites, where each s ∈ S has an associated radius r_s > 0. The disk graph D(S) is the undirected graph with vertex set S and an undirected edge between two sites s, t ∈ S if and only if |st| ≤ r_s + r_t, i.e., if the disks with centers s and t and respective radii r_s and r_t intersect. Disk graphs are used to model sensor networks. Similarly, the transmission graph T(S) is the directed graph with vertex set S and a directed edge from a site s to a site t if and only if |st| ≤ r_s, i.e., if t lies in the disk with center s and radius r_s. We provide algorithms for detecting (directed) triangles and, more generally, computing the length of a shortest cycle (the girth) in D(S) and in T(S). These problems are notoriously hard in general, but better solutions exist for special graph classes such as planar graphs. We obtain similarly efficient results for disk graphs and for transmission graphs. More precisely, we show that a shortest (Euclidean) triangle in D(S) and in T(S) can be found in O(n log n) expected time, and that the (weighted) girth of D(S) can be found in O(n log n) expected time. For this, we develop new tools for batched range searching that may be of independent interest.
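
The two graph classes are easy to state in code; a brute-force O(n²) construction (hypothetical names; a site is a tuple (x, y, r)), against which the O(n log n) bounds above should be contrasted:

import math
from itertools import combinations, permutations

def disk_graph_edges(sites):
    """Undirected edges of D(S): an edge {s, t} iff |st| <= r_s + r_t."""
    return [(i, j) for i, j in combinations(range(len(sites)), 2)
            if math.dist(sites[i][:2], sites[j][:2]) <= sites[i][2] + sites[j][2]]

def transmission_graph_edges(sites):
    """Directed edges of T(S): an edge s -> t iff |st| <= r_s."""
    return [(i, j) for i, j in permutations(range(len(sites)), 2)
            if math.dist(sites[i][:2], sites[j][:2]) <= sites[i][2]]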

Cite as

Haim Kaplan, Katharina Klost, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Triangles and Girth in Disk Graphs and Transmission Graphs. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 64:1-64:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kaplan_et_al:LIPIcs.ESA.2019.64,
  author =	{Kaplan, Haim and Klost, Katharina and Mulzer, Wolfgang and Roditty, Liam and Seiferth, Paul and Sharir, Micha},
  title =	{{Triangles and Girth in Disk Graphs and Transmission Graphs}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{64:1--64:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.64},
  URN =		{urn:nbn:de:0030-drops-111859},
  doi =		{10.4230/LIPIcs.ESA.2019.64},
  annote =	{Keywords: disk graph, transmission graph, triangle, girth}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games (Track B: Automata, Logic, Semantics, and Theory of Programming)

Authors: Dani Dorfman, Haim Kaplan, and Uri Zwick

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We present an improved exponential time algorithm for Energy Games, and hence also for Mean Payoff Games. The running time of the new algorithm is O(min(mnW, mn 2^{n/2} log W)), where n is the number of vertices, m is the number of edges, and the edge weights are integers of absolute value at most W. For small values of W, the algorithm matches the performance of the pseudopolynomial time algorithm of Brim et al. on which it is based. For W ≥ n 2^{n/2}, the new algorithm is faster than the algorithm of Brim et al. and is currently the fastest deterministic algorithm for Energy Games and Mean Payoff Games. The new algorithm is obtained by introducing a technique of forecasting repetitive actions performed by the algorithm of Brim et al., along with the use of an edge-weight scaling technique.
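
For context, a naive version of the value-iteration scheme of Brim et al. that the abstract builds on (hypothetical names, and a simple fixed-point loop instead of the efficient worklist-based lifting): every vertex is lifted to the minimal initial credit with which the energy player can keep the accumulated weight nonnegative forever, and credits above nW mean that no finite credit suffices.

def minimal_energies(n, edges, owner, W):
    """Minimal sufficient initial credits in an energy game.
    edges: list of (u, v, w) with |w| <= W; owner[v] is 'max' if the energy
    player moves at v, else 'min'. Every vertex needs an outgoing edge."""
    INF = n * W + 1                       # stands for "no finite credit wins"
    out = [[] for _ in range(n)]
    for u, v, w in edges:
        out[u].append((v, w))
    f = [0] * n
    changed = True
    while changed:                        # monotone iteration to the least fixed point
        changed = False
        for v in range(n):
            need = [min(INF, max(0, f[u] - w)) for u, w in out[v]]
            new = min(need) if owner[v] == 'max' else max(need)
            if new > f[v]:
                f[v] = new
                changed = True
    return [x if x <= n * W else None for x in f]   # None: losing vertex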

Cite as

Dani Dorfman, Haim Kaplan, and Uri Zwick. A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games (Track B: Automata, Logic, Semantics, and Theory of Programming). In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 114:1-114:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dorfman_et_al:LIPIcs.ICALP.2019.114,
  author =	{Dorfman, Dani and Kaplan, Haim and Zwick, Uri},
  title =	{{A Faster Deterministic Exponential Time Algorithm for Energy Games and Mean Payoff Games}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{114:1--114:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.114},
  URN =		{urn:nbn:de:0030-drops-106909},
  doi =		{10.4230/LIPIcs.ICALP.2019.114},
  annote =	{Keywords: Energy Games, Mean Payoff Games, Scaling}
}
Document
General Techniques for Approximate Incidences and Their Application to the Camera Posing Problem

Authors: Dror Aiger, Haim Kaplan, Efi Kokiopoulou, Micha Sharir, and Bernhard Zeisl

Published in: LIPIcs, Volume 129, 35th International Symposium on Computational Geometry (SoCG 2019)


Abstract
We consider the classical camera pose estimation problem that arises in many computer vision applications, in which we are given n 2D-3D correspondences between points in the scene and points in the camera image (some of which are incorrect associations), and we aim to determine the camera pose (the position and orientation of the camera in the scene) from this data. We demonstrate that this posing problem can be reduced to the problem of computing ε-approximate incidences between two-dimensional surfaces (derived from the input correspondences) and points (on a grid) in a four-dimensional pose space. Similar reductions can be applied to other camera pose problems, as well as to similar problems in related application areas. We describe and analyze three techniques for solving the resulting ε-approximate incidences problem in the context of our camera posing application. The first is a straightforward assignment of surfaces to the cells of a grid (of side length ε) that they intersect. The second is a variant of a primal-dual technique, recently introduced by a subset of the authors [Aiger et al., 2017] for different (and simpler) applications. The third is a non-trivial generalization of a data structure of Fonseca and Mount [Da Fonseca and Mount, 2010], originally designed for the case of hyperplanes. We present and analyze this technique in full generality, and then apply it to the camera posing problem at hand. We compare our methods experimentally on real and synthetic data. Our experiments show that for the typical values of n and ε, the primal-dual method is the fastest, also in practice.

Cite as

Dror Aiger, Haim Kaplan, Efi Kokiopoulou, Micha Sharir, and Bernhard Zeisl. General Techniques for Approximate Incidences and Their Application to the Camera Posing Problem. In 35th International Symposium on Computational Geometry (SoCG 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 129, pp. 8:1-8:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{aiger_et_al:LIPIcs.SoCG.2019.8,
  author =	{Aiger, Dror and Kaplan, Haim and Kokiopoulou, Efi and Sharir, Micha and Zeisl, Bernhard},
  title =	{{General Techniques for Approximate Incidences and Their Application to the Camera Posing Problem}},
  booktitle =	{35th International Symposium on Computational Geometry (SoCG 2019)},
  pages =	{8:1--8:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-104-7},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{129},
  editor =	{Barequet, Gill and Wang, Yusu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2019.8},
  URN =		{urn:nbn:de:0030-drops-104129},
  doi =		{10.4230/LIPIcs.SoCG.2019.8},
  annote =	{Keywords: Camera positioning, Approximate incidences, Incidences}
}
Document
Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps

Authors: Haim Kaplan, László Kozma, Or Zamir, and Uri Zwick

Published in: OASIcs, Volume 69, 2nd Symposium on Simplicity in Algorithms (SOSA 2019)


Abstract
We use soft heaps to obtain simpler optimal algorithms for selecting the k-th smallest item, and the set of k smallest items, from a heap-ordered tree, from a collection of sorted lists, and from X+Y, where X and Y are two unsorted sets. Our results match, and in some ways extend and improve, classical results of Frederickson (1993) and Frederickson and Johnson (1982). In particular, for selecting the k-th smallest item, or the set of k smallest items, from a collection of m sorted lists we obtain a new optimal "output-sensitive" algorithm that performs only O(m + ∑_{i=1}^m log(k_i+1)) comparisons, where k_i is the number of items of the i-th list that belong to the overall set of k smallest items.
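
For orientation, the classical binary-heap baseline for one of these selection problems (hypothetical names): the set of k smallest items from a collection of m sorted lists in O(m + k log m) comparisons. The paper's soft-heap algorithm improves this to the output-sensitive optimal bound stated above.

import heapq

def k_smallest_from_lists(lists, k):
    """The k smallest items overall from sorted lists, via a binary heap."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap and len(out) < k:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out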

Cite as

Haim Kaplan, László Kozma, Or Zamir, and Uri Zwick. Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps. In 2nd Symposium on Simplicity in Algorithms (SOSA 2019). Open Access Series in Informatics (OASIcs), Volume 69, pp. 5:1-5:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kaplan_et_al:OASIcs.SOSA.2019.5,
  author =	{Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Zamir, Or and Zwick, Uri},
  title =	{{Selection from Heaps, Row-Sorted Matrices, and X+Y Using Soft Heaps}},
  booktitle =	{2nd Symposium on Simplicity in Algorithms (SOSA 2019)},
  pages =	{5:1--5:21},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-099-6},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{69},
  editor =	{Fineman, Jeremy T. and Mitzenmacher, Michael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SOSA.2019.5},
  URN =		{urn:nbn:de:0030-drops-100315},
  doi =		{10.4230/OASIcs.SOSA.2019.5},
  annote =	{Keywords: selection, soft heap}
}
Document
Approximate Minimum-Weight Matching with Outliers Under Translation

Authors: Pankaj K. Agarwal, Haim Kaplan, Geva Kipper, Wolfgang Mulzer, Günter Rote, Micha Sharir, and Allen Xiao

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
Our goal is to compare two planar point sets by finding subsets of a given size such that a minimum-weight matching between them has the smallest weight. This can be done by a translation of one set that minimizes the weight of the matching. We give efficient algorithms (a) for finding approximately optimal matchings, when the cost of a matching is the L_p-norm of the tuple of the Euclidean distances between the pairs of matched points, for any p ∈ [1,∞], and (b) for constructing small-size approximate minimization (or matching) diagrams: partitions of the translation space into regions, together with an approximate optimal matching for each region.

Cite as

Pankaj K. Agarwal, Haim Kaplan, Geva Kipper, Wolfgang Mulzer, Günter Rote, Micha Sharir, and Allen Xiao. Approximate Minimum-Weight Matching with Outliers Under Translation. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 26:1-26:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.ISAAC.2018.26,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Kipper, Geva and Mulzer, Wolfgang and Rote, G\"{u}nter and Sharir, Micha and Xiao, Allen},
  title =	{{Approximate Minimum-Weight Matching with Outliers Under Translation}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{26:1--26:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.26},
  URN =		{urn:nbn:de:0030-drops-99747},
  doi =		{10.4230/LIPIcs.ISAAC.2018.26},
  annote =	{Keywords: Minimum-weight partial matching, Pattern matching, Approximation}
}
Document
Stabbing Pairwise Intersecting Disks by Five Points

Authors: Sariel Har-Peled, Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, Micha Sharir, and Max Willert

Published in: LIPIcs, Volume 123, 29th International Symposium on Algorithms and Computation (ISAAC 2018)


Abstract
Suppose we are given a set D of n pairwise intersecting disks in the plane. A planar point set P stabs D if and only if each disk in D contains at least one point from P. We present a deterministic algorithm that takes O(n) time to find five points that stab D. Furthermore, we give a simple example of 13 pairwise intersecting disks that cannot be stabbed by three points. This provides a simple, albeit slightly weaker, algorithmic version of a classical result by Danzer that such a set D can always be stabbed by four points.

Cite as

Sariel Har-Peled, Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, Micha Sharir, and Max Willert. Stabbing Pairwise Intersecting Disks by Five Points. In 29th International Symposium on Algorithms and Computation (ISAAC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 123, pp. 50:1-50:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{harpeled_et_al:LIPIcs.ISAAC.2018.50,
  author =	{Har-Peled, Sariel and Kaplan, Haim and Mulzer, Wolfgang and Roditty, Liam and Seiferth, Paul and Sharir, Micha and Willert, Max},
  title =	{{Stabbing Pairwise Intersecting Disks by Five Points}},
  booktitle =	{29th International Symposium on Algorithms and Computation (ISAAC 2018)},
  pages =	{50:1--50:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-094-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{123},
  editor =	{Hsu, Wen-Lian and Lee, Der-Tsai and Liao, Chung-Shou},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2018.50},
  URN =		{urn:nbn:de:0030-drops-99989},
  doi =		{10.4230/LIPIcs.ISAAC.2018.50},
  annote =	{Keywords: Disk graph, piercing set, LP-type problem}
}
Document
Pairing heaps: the forward variant

Authors: Dani Dorfman, Haim Kaplan, László Kozma, and Uri Zwick

Published in: LIPIcs, Volume 117, 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)


Abstract
The pairing heap is a classical heap data structure introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. It is remarkable both for its simplicity and for its excellent performance in practice. The "magic" of pairing heaps lies in the restructuring that happens after the deletion of the smallest item. The resulting collection of trees is consolidated in two rounds: a left-to-right pairing round, followed by a right-to-left accumulation round. Fredman et al. showed, via an elegant correspondence to splay trees, that in a pairing heap of size n all heap operations take O(log n) amortized time. They also proposed an arguably more natural variant, where both pairing and accumulation are performed in a combined left-to-right round (called the forward variant of pairing heaps). The analogy to splaying breaks down in this case, and the analysis of the forward variant was left open. In this paper we show that inserting an item and deleting the minimum in a forward-variant pairing heap both take amortized time O(log n ⋅ 4^{√(log n)}). This is the first improvement over the O(√n) bound shown by Fredman et al. three decades ago. Our analysis relies on a new potential function that tracks parent-child rank-differences in the heap.
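
A sketch of the forward round in Python, under one natural reading of the combined left-to-right description above (hypothetical names): neighboring trees are linked in pairs from left to right, and each pair's winner is immediately linked into the tree accumulated so far.

class Node:
    def __init__(self, key):
        self.key, self.children = key, []

def link(a, b):
    """Link two heap-ordered trees: the larger root becomes a child of the other."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def forward_consolidate(trees):
    """One combined left-to-right round over the root list after a delete-min."""
    acc = None
    for i in range(0, len(trees), 2):
        pair = trees[i] if i + 1 == len(trees) else link(trees[i], trees[i + 1])
        acc = pair if acc is None else link(acc, pair)
    return acc  # the new root; None if the heap became empty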

Cite as

Dani Dorfman, Haim Kaplan, László Kozma, and Uri Zwick. Pairing heaps: the forward variant. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 117, pp. 13:1-13:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{dorfman_et_al:LIPIcs.MFCS.2018.13,
  author =	{Dorfman, Dani and Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Zwick, Uri},
  title =	{{Pairing heaps: the forward variant}},
  booktitle =	{43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018)},
  pages =	{13:1--13:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-086-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{117},
  editor =	{Potapov, Igor and Spirakis, Paul and Worrell, James},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2018.13},
  URN =		{urn:nbn:de:0030-drops-95956},
  doi =		{10.4230/LIPIcs.MFCS.2018.13},
  annote =	{Keywords: data structure, priority queue, pairing heap}
}
Document
Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees

Authors: Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
We revisit multipass pairing heaps and path-balanced binary search trees (BSTs), two classical algorithms for data structure maintenance. The pairing heap is a simple and efficient "self-adjusting" heap, introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. In the multipass variant (one of the original pairing heap variants described by Fredman et al.) the minimum item is extracted via repeated pairing rounds in which neighboring siblings are linked. Path-balanced BSTs, proposed by Sleator (cf. Subramanian, 1996), are a natural alternative to Splay trees (Sleator and Tarjan, 1983). In a path-balanced BST, whenever an item is accessed, the search path leading to that item is re-arranged into a balanced tree. Despite their simplicity, both algorithms turned out to be difficult to analyse. Fredman et al. showed that operations in multipass pairing heaps take amortized O(log n * log log n / log log log n) time. For searching in path-balanced BSTs, Balasubramanian and Raman showed in 1995 the same amortized time bound of O(log n * log log n / log log log n), using a different argument. In this paper we show an explicit connection between the two algorithms and improve both bounds to O(log n * 2^{log^* n} * log^* n) and O(log n * 2^{log^* n} * (log^* n)^2), respectively, where log^* denotes the slowly growing iterated logarithm function. These are the first improvements in more than three and two decades, respectively, approaching the information-theoretic lower bound of Omega(log n).
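For contrast with the forward variant above, the multipass rule simply repeats left-to-right pairing rounds until a single tree remains. A minimal sketch under the same assumed (key, children) tree representation (again illustrative, not the authors' code):

def link(a, b):
    # Heap-ordered link: the larger root goes below the smaller root.
    if a[0] > b[0]:
        a, b = b, a
    a[1].append(b)
    return a

def consolidate_multipass(trees):
    # Repeat pairing rounds over the whole list until one tree remains;
    # each round roughly halves the number of trees.
    while len(trees) > 1:
        trees = [link(trees[i], trees[i + 1]) if i + 1 < len(trees) else trees[i]
                 for i in range(0, len(trees), 2)]
    return trees[0]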

Cite as

Dani Dorfman, Haim Kaplan, László Kozma, Seth Pettie, and Uri Zwick. Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 24:1-24:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{dorfman_et_al:LIPIcs.ESA.2018.24,
  author =	{Dorfman, Dani and Kaplan, Haim and Kozma, L\'{a}szl\'{o} and Pettie, Seth and Zwick, Uri},
  title =	{{Improved Bounds for Multipass Pairing Heaps and Path-Balanced Binary Search Trees}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{24:1--24:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.24},
  URN =		{urn:nbn:de:0030-drops-94879},
  doi =		{10.4230/LIPIcs.ESA.2018.24},
  annote =	{Keywords: data structure, priority queue, pairing heap, binary search tree}
}
Document
Union of Hypercubes and 3D Minkowski Sums with Random Sizes

Authors: Pankaj K. Agarwal, Haim Kaplan, and Micha Sharir

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
Let T={triangle_1,...,triangle_n} be a set of n pairwise-disjoint triangles in R^3, and let B be a convex polytope in R^3 with a constant number of faces. For each i, let C_i = triangle_i oplus r_i B denote the Minkowski sum of triangle_i with a copy of B scaled by r_i>0. We show that if the scaling factors r_1, ..., r_n are chosen randomly then the expected complexity of the union of C_1, ..., C_n is O(n^{2+epsilon}), for any epsilon > 0; the constant of proportionality depends on epsilon and the complexity of B. The worst-case bound can be Theta(n^3). We also consider a special case of this problem in which T is a set of points in R^3 and B is a unit cube in R^3, i.e., each C_i is a cube of side-length 2r_i. We show that if the scaling factors are chosen randomly then the expected complexity of the union of the cubes is O(n log^2 n), and it improves to O(n log n) if the scaling factors are chosen randomly from a "well-behaved" probability density function (pdf). We also extend the latter results to higher dimensions. For any fixed odd value of d, we show that the expected complexity of the union of the hypercubes is O(n^{floor(d/2)} log n) and the bound improves to O(n^{floor(d/2)}) if the scaling factors are chosen from a "well-behaved" pdf. The worst-case bounds are Theta(n^2) in R^3, and Theta(n^{ceil(d/2)}) in higher dimensions.

Cite as

Pankaj K. Agarwal, Haim Kaplan, and Micha Sharir. Union of Hypercubes and 3D Minkowski Sums with Random Sizes. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{agarwal_et_al:LIPIcs.ICALP.2018.10,
  author =	{Agarwal, Pankaj K. and Kaplan, Haim and Sharir, Micha},
  title =	{{Union of Hypercubes and 3D Minkowski Sums with Random Sizes}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.10},
  URN =		{urn:nbn:de:0030-drops-90147},
  doi =		{10.4230/LIPIcs.ICALP.2018.10},
  annote =	{Keywords: Computational geometry, Minkowski sums, Axis-parallel cubes, Union of geometric objects, Objects with random sizes}
}
Document
Output Sensitive Algorithms for Approximate Incidences and Their Applications

Authors: Dror Aiger, Haim Kaplan, and Micha Sharir

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
An epsilon-approximate incidence between a point and some geometric object (line, circle, plane, sphere) occurs when the point and the object lie at distance at most epsilon from each other. Given a set of points and a set of objects, computing the approximate incidences between them is a major step in many database and web-based applications in computer vision and graphics, including robust model fitting, approximate point pattern matching, and estimating the fundamental matrix in epipolar (stereo) geometry. In a typical approximate incidence problem of this sort, we are given a set P of m points in two or three dimensions, a set S of n objects (lines, circles, planes, spheres), and an error parameter epsilon>0, and our goal is to report all pairs (p,s) in P times S that lie at distance at most epsilon from one another. We present efficient output-sensitive approximation algorithms for quite a few cases, including points and lines or circles in the plane, and points and planes, spheres, lines, or circles in three dimensions. Several of these cases arise in the applications mentioned above. Our algorithms report all pairs at distance <= epsilon, but may also report additional pairs, all of which are guaranteed to be at distance at most alpha * epsilon, for some constant alpha>1. Our algorithms are based on simple primal and dual grid decompositions and are easy to implement. We note though that (a) the use of duality, which leads to significant improvements in the overhead cost of the algorithms, appears to be novel for this kind of problem; (b) the correct choice of duality in some of these problems is fairly intricate and requires some care; and (c) the correctness and performance analysis of the algorithms (especially in the more advanced versions) is fairly non-trivial. We analyze our algorithms and prove guaranteed upper bounds on their running time and on the "distortion" parameter alpha. We also briefly describe the motivating applications, and show how they can effectively exploit our solutions. The superior theoretical bounds on the performance of our algorithms, and their simplicity, make them ideal tools for these applications. In a series of preliminary experiments (not included in this abstract), we substantiate this claim, and show that our algorithms lead in practice to significantly improved performance of the aforementioned applications.
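The primal grid decomposition is easy to illustrate for points and lines in the plane. The sketch below (assumed representation: each line given in normalized form a*x + b*y + c = 0 with a^2 + b^2 = 1) uses the grid only as a coarse filter and then checks distances exactly, so it reports with no distortion alpha; it omits the duality and output-sensitivity machinery that the paper is about.

from collections import defaultdict
from math import floor, sqrt

def approx_incidences(points, lines, eps):
    # Bucket the points into a grid with cells of side eps.
    grid = defaultdict(list)
    for (x, y) in points:
        grid[(floor(x / eps), floor(y / eps))].append((x, y))
    half_diam = eps * sqrt(2) / 2  # half the diagonal of a cell
    out = []
    for (a, b, c) in lines:  # line: a*x + b*y + c = 0 with a^2 + b^2 = 1
        # For brevity we scan all non-empty cells; a real implementation
        # would walk only the cells stabbed by the eps-strip of the line.
        for (i, j), bucket in grid.items():
            cx, cy = (i + 0.5) * eps, (j + 0.5) * eps
            if abs(a * cx + b * cy + c) > eps + half_diam:
                continue  # every point of this cell is farther than eps
            for (px, py) in bucket:
                if abs(a * px + b * py + c) <= eps:
                    out.append(((px, py), (a, b, c)))
    return out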

Cite as

Dror Aiger, Haim Kaplan, and Micha Sharir. Output Sensitive Algorithms for Approximate Incidences and Their Applications. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{aiger_et_al:LIPIcs.ESA.2017.5,
  author =	{Aiger, Dror and Kaplan, Haim and Sharir, Micha},
  title =	{{Output Sensitive Algorithms for Approximate Incidences and Their Applications}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{5:1--5:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.5},
  URN =		{urn:nbn:de:0030-drops-78224},
  doi =		{10.4230/LIPIcs.ESA.2017.5},
  annote =	{Keywords: Approximate incidences, near-neighbor reporting, duality, grid-based approximation}
}
Document
Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points

Authors: Haim Kaplan, Sasanka Roy, and Micha Sharir

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
Let P be a set of n points in the plane in general position, and consider the problem of finding an axis-parallel rectangle with a given perimeter, or area, or diagonal, that encloses the maximum number of points of P. We present an exact algorithm that finds such a rectangle in O(n^{5/2} log n) time, and, for the case of a fixed perimeter or diagonal, we also obtain (i) an improved exact algorithm that runs in O(nk^{3/2} log k) time, and (ii) an approximation algorithm that finds, in O(n+(n/(k epsilon^5))*log^{5/2}(n/k)log((1/epsilon) log(n/k))) time, a rectangle of the given perimeter or diagonal that contains at least (1-epsilon)k points of P, where k is the optimum value. We then show how to turn this algorithm into one that finds, for a given k, an axis-parallel rectangle of smallest perimeter (or area, or diagonal) that contains k points of P. We obtain the first subcubic algorithms for these problems, significantly improving the current state of the art.

Cite as

Haim Kaplan, Sasanka Roy, and Micha Sharir. Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 52:1-52:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.ESA.2017.52,
  author =	{Kaplan, Haim and Roy, Sasanka and Sharir, Micha},
  title =	{{Finding Axis-Parallel Rectangles of Fixed Perimeter or Area Containing the Largest Number of Points}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{52:1--52:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.52},
  URN =		{urn:nbn:de:0030-drops-78608},
  doi =		{10.4230/LIPIcs.ESA.2017.52},
  annote =	{Keywords: Computational geometry, geometric optimization, rectangles, perimeter, area}
}
Document
Clustering in Hypergraphs to Minimize Average Edge Service Time

Authors: Ori Rottenstreich, Haim Kaplan, and Avinatan Hassidim

Published in: LIPIcs, Volume 87, 25th Annual European Symposium on Algorithms (ESA 2017)


Abstract
We study the problem of clustering the vertices of a weighted hypergraph such that on average the vertices of each edge can be covered by a small number of clusters. This problem has many applications, such as designing medical tests, clustering files on disk servers, and placing network services on servers. The edges of the hypergraph model groups of items that are likely to be needed together, and the optimization criterion we use can be interpreted as the average delay (or cost) to serve the items of a typical edge. We describe and analyze algorithms for this problem for the case in which the clusters have to be disjoint and for the case where clusters can overlap. The analysis is often subtle and reveals interesting structure and invariants that one can utilize.

Cite as

Ori Rottenstreich, Haim Kaplan, and Avinatan Hassidim. Clustering in Hypergraphs to Minimize Average Edge Service Time. In 25th Annual European Symposium on Algorithms (ESA 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 87, pp. 64:1-64:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{rottenstreich_et_al:LIPIcs.ESA.2017.64,
  author =	{Rottenstreich, Ori and Kaplan, Haim and Hassidim, Avinatan},
  title =	{{Clustering in Hypergraphs to Minimize Average Edge Service Time}},
  booktitle =	{25th Annual European Symposium on Algorithms (ESA 2017)},
  pages =	{64:1--64:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-049-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{87},
  editor =	{Pruhs, Kirk and Sohler, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2017.64},
  URN =		{urn:nbn:de:0030-drops-78777},
  doi =		{10.4230/LIPIcs.ESA.2017.64},
  annote =	{Keywords: Clustering, average cover time, hypergraphs, set cover}
}
Document
Min-Cost Bipartite Perfect Matching with Delays

Authors: Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably, ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who gave an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus, almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.
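The objective is simple to state in code. The helper below (names and representation are assumptions for illustration) evaluates the connection-plus-delay cost of a proposed matching schedule:

def mbpmd_objective(schedule, dist):
    # schedule: list of ((p, t_p), (q, t_q), t_match) triples, meaning the
    # requests arriving at points p and q at times t_p and t_q are matched
    # at time t_match >= max(t_p, t_q).
    total = 0.0
    for (p, t_p), (q, t_q), t_match in schedule:
        assert t_match >= max(t_p, t_q), "cannot match before both arrivals"
        total += dist(p, q)                          # connection cost
        total += (t_match - t_p) + (t_match - t_q)   # delay cost
    return total

For instance, on the line metric, mbpmd_objective([((0, 0.0), (3, 1.0), 1.0)], lambda p, q: abs(p - q)) is 3 + 1 + 0 = 4: the algorithm paid one unit of waiting so that both requests were present when matched.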

Cite as

Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer. Min-Cost Bipartite Perfect Matching with Delays. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{ashlagi_et_al:LIPIcs.APPROX-RANDOM.2017.1,
  author =	{Ashlagi, Itai and Azar, Yossi and Charikar, Moses and Chiplunkar, Ashish and Geri, Ofir and Kaplan, Haim and Makhijani, Rahul and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{Min-Cost Bipartite Perfect Matching with Delays}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  URN =		{urn:nbn:de:0030-drops-75509},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  annote =	{Keywords: online algorithms with delayed service, bipartite matching, competitive analysis}
}
Document
Bottleneck Paths and Trees and Deterministic Graphical Games

Authors: Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck Spanning Tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in O(m * log^*(n)) time, where m is the number of edges and n is the number of vertices in the graph. We present a slightly improved randomized algorithm for these problems with an expected running time of O(m * beta(m,n)), where beta(m,n) = min{k >= 1 | log^{(k)}n <= m/n} <= log^*(n) - log^*(m/n)+1. This is the first improvement for these problems in over 25 years. In particular, if m >= n * log^{(k)} n for some constant k, the expected running time of the new algorithm is O(m). Our algorithm, like that of Gabow and Tarjan, works in the comparison model. We also observe that in the word-RAM model, both problems can be solved deterministically in O(m) time. Finally, we solve an open problem of Andersson et al., giving a deterministic O(m)-time comparison-based algorithm for solving deterministic 2-player turn-based zero-sum terminal payoff games, also known as Deterministic Graphical Games (DGG).
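For orientation, the textbook O(m log n) solution to the BP problem is Dijkstra's algorithm with max-relaxation in place of addition; the sketch below is that easy baseline, not the improved algorithm of the paper.

import heapq

def bottleneck_path_value(graph, s, t):
    # graph: dict mapping each vertex to a list of (neighbor, weight) arcs.
    # Returns the smallest achievable largest edge weight on an s-t path,
    # or infinity if t is unreachable from s.
    INF = float('inf')
    best = {s: -INF}            # best bottleneck value found per vertex
    pq = [(-INF, s)]
    while pq:
        b, u = heapq.heappop(pq)
        if u == t:
            return b
        if b > best.get(u, INF):
            continue            # stale queue entry
        for v, w in graph.get(u, ()):
            nb = max(b, w)      # bottleneck of the extended path
            if nb < best.get(v, INF):
                best[v] = nb
                heapq.heappush(pq, (nb, v))
    return INF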

Cite as

Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick. Bottleneck Paths and Trees and Deterministic Graphical Games. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 27:1-27:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.STACS.2016.27,
  author =	{Chechik, Shiri and Kaplan, Haim and Thorup, Mikkel and Zamir, Or and Zwick, Uri},
  title =	{{Bottleneck Paths and Trees and Deterministic Graphical Games}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{27:1--27:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.27},
  URN =		{urn:nbn:de:0030-drops-57283},
  doi =		{10.4230/LIPIcs.STACS.2016.27},
  annote =	{Keywords: bottleneck paths, comparison model, deterministic graphical games}
}
Document
Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees

Authors: Shiri Chechik, Edith Cohen, and Haim Kaplan

Published in: LIPIcs, Volume 40, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)


Abstract
The average distance from a node to all other nodes in a graph, or from a query point in a metric space to a set of points, is a fundamental quantity in data analysis. The inverse of the average distance, known as the (classic) closeness centrality of a node, is a popular importance measure in the study of social networks. We develop novel structural insights on the sparsifiability of the distance relation via weighted sampling. Based on that, we present highly practical algorithms with strong statistical guarantees for fundamental problems. We show that the average distance (and hence the centrality) for all nodes in a graph can be estimated using O(epsilon^{-2}) single-source distance computations. For a set V of n points in a metric space, we show that after preprocessing which uses O(n) distance computations we can compute a weighted sample S subset of V of size O(epsilon^{-2}) such that the average distance from any query point v to V can be estimated from the distances from v to S. Finally, we show that for a set of points V in a metric space, we can estimate the average pairwise distance using O(n+epsilon^{-2}) distance computations. The estimate is based on a weighted sample of O(epsilon^{-2}) pairs of points, which is computed using O(n) distance computations. Our estimates are unbiased with normalized root mean square error (NRMSE) of at most epsilon. Increasing the sample size by an O(log(n)) factor ensures that the probability that the relative error exceeds epsilon is polynomially small.
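A hedged sketch of the estimator's shape: draw a weighted sample once, then answer average-distance queries from the sample alone using inverse-probability reweighting. Any positive weights keep the estimate unbiased; the specific weights that yield the paper's O(epsilon^{-2}) sample-size guarantee come from its structural analysis and are not reproduced here.

import random

def make_avg_dist_estimator(V, weights, k, dist):
    # Draw k indices with probability proportional to weights (all > 0).
    n, total = len(V), float(sum(weights))
    probs = [w / total for w in weights]
    sample = random.choices(range(n), weights=weights, k=k)

    def avg_dist(v):
        # Unbiased (Horvitz-Thompson style) estimate of
        # (1/n) * sum over u in V of dist(v, u).
        return sum(dist(v, V[i]) / (n * probs[i]) for i in sample) / k
    return avg_dist

With weights = [1] * len(V) this degenerates to plain uniform Monte Carlo sampling, which already needs only O(epsilon^{-2}) samples when the distance distribution is not too skewed.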

Cite as

Shiri Chechik, Edith Cohen, and Haim Kaplan. Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 40, pp. 659-679, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{chechik_et_al:LIPIcs.APPROX-RANDOM.2015.659,
  author =	{Chechik, Shiri and Cohen, Edith and Kaplan, Haim},
  title =	{{Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015)},
  pages =	{659--679},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-89-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{40},
  editor =	{Garg, Naveen and Jansen, Klaus and Rao, Anup and Rolim, Jos\'{e} D. P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2015.659},
  URN =		{urn:nbn:de:0030-drops-53291},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2015.659},
  annote =	{Keywords: Closeness Centrality; Average Distance; Metric Space; Weighted Sampling}
}
Document
Spanners and Reachability Oracles for Directed Transmission Graphs

Authors: Haim Kaplan, Wolfgang Mulzer, Liam Roditty, and Paul Seiferth

Published in: LIPIcs, Volume 34, 31st International Symposium on Computational Geometry (SoCG 2015)


Abstract
Let P be a set of n points in d dimensions, each with an associated radius r_p > 0. The transmission graph G for P has vertex set P and an edge from p to q if and only if q lies in the ball with radius r_p around p. Let t > 1. A t-spanner H for G is a sparse subgraph of G such that for any two vertices p, q connected by a path of length l in G, there is a p-q-path of length at most tl in H. We show how to compute a t-spanner for G if d=2. The running time is O(n (log n + log Psi)), where Psi is the ratio of the largest and smallest radius of two points in P. We extend this construction to be independent of Psi at the expense of a polylogarithmic overhead in the running time. As a first application, we prove a property of the t-spanner that allows us to find a BFS tree in G for any given start vertex s in P in the same time. After that, we deal with reachability oracles for G. These are data structures that answer reachability queries: given two vertices, is there a directed path between them? The quality of a reachability oracle is measured by the space S(n), the query time Q(n), and the preprocessing time. For d=1, we show how to compute an oracle with Q(n) = O(1) and S(n) = O(n) in time O(n log n). For d=2, the radius ratio Psi again turns out to be an important measure for the complexity of the problem. We present three different data structures whose quality depends on Psi: (i) if Psi < sqrt(3), we achieve Q(n) = O(1) with S(n) = O(n) and preprocessing time O(n log n); (ii) if Psi >= sqrt(3), we get Q(n) = O(Psi^3 sqrt(n)) and S(n) = O(Psi^5 n^(3/2)); and (iii) if Psi is polynomially bounded in n, we use probabilistic methods to obtain an oracle with Q(n) = O(n^(2/3)log n) and S(n) = O(n^(5/3) log n) that answers queries correctly with high probability. We employ our t-spanner to achieve fast preprocessing times of O(Psi^5 n^(3/2)) and O(n^(5/3) log^2 n) in cases (ii) and (iii), respectively.
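For orientation, here is the quadratic baseline that the paper's spanner machinery is designed to avoid: build the transmission graph explicitly (an arc i -> j whenever point j lies in the ball of radius r_i around point i) and run a plain BFS.

from collections import deque

def transmission_bfs(points, radii, s):
    # points: list of (x, y); radii: matching list of radii; s: start index.
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):               # O(n^2) explicit construction
        xi, yi = points[i]
        r2 = radii[i] * radii[i]
        for j in range(n):
            if i != j:
                dx, dy = points[j][0] - xi, points[j][1] - yi
                if dx * dx + dy * dy <= r2:
                    adj[i].append(j)
    depth = {s: 0}                   # BFS tree depths from s
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth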

Cite as

Haim Kaplan, Wolfgang Mulzer, Liam Roditty, and Paul Seiferth. Spanners and Reachability Oracles for Directed Transmission Graphs. In 31st International Symposium on Computational Geometry (SoCG 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 34, pp. 156-170, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.SOCG.2015.156,
  author =	{Kaplan, Haim and Mulzer, Wolfgang and Roditty, Liam and Seiferth, Paul},
  title =	{{Spanners and Reachability Oracles for Directed Transmission Graphs}},
  booktitle =	{31st International Symposium on Computational Geometry (SoCG 2015)},
  pages =	{156--170},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-83-5},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{34},
  editor =	{Arge, Lars and Pach, J\'{a}nos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SOCG.2015.156},
  URN =		{urn:nbn:de:0030-drops-51062},
  doi =		{10.4230/LIPIcs.SOCG.2015.156},
  annote =	{Keywords: Transmission Graphs, Reachability Oracles, Spanner, Intersection Graph}
}
Document
Minimum Cost Flows in Graphs with Unit Capacities

Authors: Andrew V. Goldberg, Haim Kaplan, Sagi Hed, and Robert E. Tarjan

Published in: LIPIcs, Volume 30, 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)


Abstract
We consider the minimum cost flow problem on graphs with unit capacities and its special cases. Previous studies developed special-purpose algorithms that exploit the fact that all capacities are one. In contrast, for maximum flow with unit capacities, the best bounds are proven for slight modifications of classical blocking flow and push-relabel algorithms. In this paper we show that the classical cost scaling algorithms of Goldberg and Tarjan (for general integer capacities), applied to a problem with unit capacities, achieve or improve the best known bounds. For weighted bipartite matching we establish an O(sqrt(rm) log C) bound for a slight variation of this algorithm. Here r is the size of the smaller side of the bipartite graph, m is the number of edges, and C is the largest absolute value of an arc cost. This simplifies a result of [Duan et al. 2011] and improves the bound, answering an open question of [Tarjan and Ramshaw 2012]. For graphs with unit vertex capacities we establish a novel O(sqrt(n) m log(nC)) bound. We also give the first cycle canceling algorithm for minimum cost flow with unit capacities. The algorithm naturally generalizes the single source shortest path algorithm of [Goldberg 1995].
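As a point of reference, the classical successive-shortest-path method is particularly simple on unit-capacity networks: repeatedly augment one unit of flow along a cheapest residual s-t path. The sketch below is that baseline, not the cost-scaling algorithms analyzed in the paper; it uses Bellman-Ford so that negative residual costs are handled, and assumes the input has no negative cycles.

def min_cost_flow_unit(n, arcs, s, t):
    # n: number of vertices; arcs: list of (u, v, cost) unit-capacity arcs.
    # Returns (max flow value, its minimum cost).
    graph = [[] for _ in range(n)]
    def add(u, v, cost):
        graph[u].append([v, cost, 1, len(graph[v])])       # forward arc
        graph[v].append([u, -cost, 0, len(graph[u]) - 1])  # residual arc
    for u, v, c in arcs:
        add(u, v, c)
    INF = float('inf')
    flow = total_cost = 0
    while True:
        dist, prev = [INF] * n, [None] * n                 # Bellman-Ford
        dist[s] = 0
        for _ in range(n - 1):
            improved = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for k, (v, cost, cap, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v] = dist[u] + cost, (u, k)
                        improved = True
            if not improved:
                break
        if dist[t] == INF:
            return flow, total_cost                        # no augmenting path
        v = t
        while v != s:                                      # push one unit
            u, k = prev[v]
            graph[u][k][2] -= 1
            graph[v][graph[u][k][3]][2] += 1
            v = u
        flow += 1
        total_cost += dist[t]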

Cite as

Andrew V. Goldberg, Haim Kaplan, Sagi Hed, and Robert E. Tarjan. Minimum Cost Flows in Graphs with Unit Capacities. In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 30, pp. 406-419, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{goldberg_et_al:LIPIcs.STACS.2015.406,
  author =	{Goldberg, Andrew V. and Kaplan, Haim and Hed, Sagi and Tarjan, Robert E.},
  title =	{{Minimum Cost Flows in Graphs with Unit Capacities}},
  booktitle =	{32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015)},
  pages =	{406--419},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-78-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{30},
  editor =	{Mayr, Ernst W. and Ollinger, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2015.406},
  URN =		{urn:nbn:de:0030-drops-49304},
  doi =		{10.4230/LIPIcs.STACS.2015.406},
  annote =	{Keywords: minimum cost flow, bipartite matching, unit capacity, cost scaling}
}
Document
Minimum s-t cut in undirected planar graphs when the source and the sink are close

Authors: Haim Kaplan and Yahav Nussbaum

Published in: LIPIcs, Volume 9, 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)


Abstract
Consider the minimum s-t cut problem in an embedded undirected planar graph. Let p be the minimum number of faces that a curve from s to t passes through. If p=1, that is, the vertices s and t are on the boundary of the same face, then the minimum cut can be found in O(n) time. For general planar graphs this cut can be found in O(n log n) time. We unify these results and give an O(n log p) time algorithm. We use cut-cycles to obtain the value of the minimum cut, and study the structure of these cycles to get an efficient algorithm.

Cite as

Haim Kaplan and Yahav Nussbaum. Minimum s-t cut in undirected planar graphs when the source and the sink are close. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 117-128, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{kaplan_et_al:LIPIcs.STACS.2011.117,
  author =	{Kaplan, Haim and Nussbaum, Yahav},
  title =	{{Minimum s-t cut in undirected planar graphs when the source and the sink are close}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{117--128},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.117},
  URN =		{urn:nbn:de:0030-drops-30049},
  doi =		{10.4230/LIPIcs.STACS.2011.117},
  annote =	{Keywords: planar graph, minimum cut, shortest path, cut cycle}
}
Document
Strong Price of Anarchy for Machine Load Balancing

Authors: Amos Fiat, Meital Levy, Haim Kaplan, and Svetlana Olonetsky

Published in: Dagstuhl Seminar Proceedings, Volume 7261, Fair Division (2007)


Abstract
As defined by Aumann in 1959, a strong equilibrium is a Nash equilibrium that is resilient to deviations by coalitions. We give tight bounds on the strong price of anarchy for load balancing on related machines. We also give tight bounds for k-strong equilibria, where the size of a deviating coalition is at most k, for unrelated machines.

Cite as

Amos Fiat, Meital Levy, Haim Kaplan, and Svetlana Olonetsky. Strong Price of Anarchy for Machine Load Balancing. In Fair Division. Dagstuhl Seminar Proceedings, Volume 7261, pp. 1-19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2007)


Copy BibTex To Clipboard

@InProceedings{fiat_et_al:DagSemProc.07261.12,
  author =	{Fiat, Amos and Levy, Meital and Kaplan, Haim and Olonetsky, Svetlana},
  title =	{{Strong Price of Anarchy for Machine Load Balancing}},
  booktitle =	{Fair Division},
  pages =	{1--19},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2007},
  volume =	{7261},
  editor =	{Steven Brams and Kirk Pruhs and Gerhard Woeginger},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07261.12},
  URN =		{urn:nbn:de:0030-drops-12256},
  doi =		{10.4230/DagSemProc.07261.12},
  annote =	{Keywords: Game theory, Strong Nash equilibria, Load balancing, Price of Anarchy}
}