142 Search Results for "Pagh, Rasmus"


Volume

LIPIcs, Volume 204

29th Annual European Symposium on Algorithms (ESA 2021)

ESA 2021, September 6-8, 2021, Lisbon, Portugal (Virtual Conference)

Editors: Petra Mutzel, Rasmus Pagh, and Grzegorz Herman

Volume

LIPIcs, Volume 53

15th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2016)

SWAT 2016, June 22-24, 2016, Reykjavik, Iceland

Editor: Rasmus Pagh

Document
Smooth Sensitivity Revisited: Towards Optimality

Authors: Richard Hladík and Jakub Tětek

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Smooth sensitivity is one of the most commonly used techniques for designing practical differentially private mechanisms. In this approach, one computes the smooth sensitivity of a given query q on the given input D and releases q(D) with noise added proportional to this smooth sensitivity. One question remains: from which distribution should we draw the noise? In this paper, we give a new class of distributions suitable for use with smooth sensitivity, which we name the PolyPlace distribution. This distribution improves upon the state-of-the-art Student's T distribution in terms of standard deviation by arbitrarily large factors, depending on a "smoothness parameter" γ, which one has to set in the smooth sensitivity framework. Moreover, our distribution is defined for a wider range of the parameter γ, which can lead to significantly better performance. Furthermore, we prove that the PolyPlace distribution converges to the Laplace distribution as γ → 0, and so does its variance. This means that the Laplace mechanism is a limiting special case of the PolyPlace mechanism, which implies that our mechanism is in a certain sense optimal for γ → 0.
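
As a rough illustration of the smooth sensitivity framework the paper works in (this is not the PolyPlace mechanism itself; the calibration constant c and the noise sampler are placeholders whose correct values depend on the chosen distribution, ε and γ), a minimal Python sketch:

import numpy as np

rng = np.random.default_rng(0)

def smooth_sensitivity_release(q_value, smooth_sens, epsilon, noise_sampler, c=1.0):
    # Generic smooth-sensitivity mechanism: release q(D) plus noise scaled by
    # the smooth sensitivity S_{q,gamma}(D). The constant c and the admissible
    # noise distribution (Laplace-like, Student's T, or the paper's PolyPlace)
    # must be calibrated jointly with epsilon and gamma; they are placeholders here.
    scale = c * smooth_sens / epsilon
    return q_value + scale * noise_sampler()

# Example with Student's T noise, the baseline the paper compares against
# (parameters illustrative, not calibrated):
noisy_answer = smooth_sensitivity_release(
    q_value=42.0, smooth_sens=1.3, epsilon=1.0,
    noise_sampler=lambda: rng.standard_t(df=3))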

Cite as

Richard Hladík and Jakub Tětek. Smooth Sensitivity Revisited: Towards Optimality. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 2:1-2:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hladik_et_al:LIPIcs.FORC.2025.2,
  author =	{Hlad{\'\i}k, Richard and T\v{e}tek, Jakub},
  title =	{{Smooth Sensitivity Revisited: Towards Optimality}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{2:1--2:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.2},
  URN =		{urn:nbn:de:0030-drops-231292},
  doi =		{10.4230/LIPIcs.FORC.2025.2},
  annote =	{Keywords: differential privacy, smooth sensitivity}
}
Document
Near-Universally-Optimal Differentially Private Minimum Spanning Trees

Authors: Richard Hladík and Jakub Tětek

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Devising mechanisms with good beyond-worst-case, input-dependent performance has been an important focus of differential privacy, with techniques such as smooth sensitivity, propose-test-release, or the inverse sensitivity mechanism being developed to achieve this goal. This makes it very natural to use the notion of universal optimality in differential privacy. Universal optimality is a strong instance-specific optimality guarantee for problems on weighted graphs, which roughly states that for any fixed underlying (unweighted) graph, the algorithm is optimal in the worst-case sense with respect to the possible settings of the edge weights. In this paper, we give the first such result in differential privacy. Namely, we prove that a simple differentially private mechanism for approximately releasing the minimum spanning tree is near-optimal in the sense of universal optimality for the 𝓁₁ neighbor relation. Previously, it was only known that this mechanism is nearly optimal in the worst case. We then focus on the 𝓁_∞ neighbor relation, for which the described mechanism is not optimal. We show that one may implement the exponential mechanism for MST in polynomial time, and that this results in universal near-optimality for both the 𝓁₁ and the 𝓁_∞ neighbor relations.

Cite as

Richard Hladík and Jakub Tětek. Near-Universally-Optimal Differentially Private Minimum Spanning Trees. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{hladik_et_al:LIPIcs.FORC.2025.6,
  author =	{Hlad{\'\i}k, Richard and T\v{e}tek, Jakub},
  title =	{{Near-Universally-Optimal Differentially Private Minimum Spanning Trees}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.6},
  URN =		{urn:nbn:de:0030-drops-231337},
  doi =		{10.4230/LIPIcs.FORC.2025.6},
  annote =	{Keywords: differential privacy, universal optimality, minimum spanning trees}
}
Document
Count on Your Elders: Laplace vs Gaussian Noise

Authors: Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, and Sahel Torkamani

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
In recent years, Gaussian noise has become a popular tool in differentially private algorithms, often replacing the Laplace noise that dominated the early literature on differential privacy. Gaussian noise is the standard approach to approximate differential privacy, often resulting in much higher utility than traditional (pure) differential privacy mechanisms. In this paper we argue that Laplace noise may in fact be preferable to Gaussian noise in many settings, in particular when we seek to achieve (ε,δ)-differential privacy for small values of δ. We consider two scenarios: First, we consider the problem of counting under continual observation and present a new generalization of the binary tree mechanism that uses a k-ary number system with negative digits to improve the privacy-accuracy trade-off. Our mechanism uses Laplace noise and, whenever δ is sufficiently small, it improves the mean squared error over the best possible (ε,δ)-differentially private factorization mechanisms based on Gaussian noise. Specifically, using k = 19 we get an asymptotic improvement over the bound given in the work by Henzinger, Upadhyay and Upadhyay (SODA 2023) when δ = O(T^{-0.92}). Second, we show that the noise added by the Gaussian mechanism can always be replaced by Laplace noise of comparable variance for the same (ε,δ)-differential privacy guarantee, and in fact for sufficiently small δ the variance of the Laplace noise becomes strictly better. This challenges the conventional wisdom that Gaussian noise should be preferred in high-dimensional settings. Finally, we study whether counting under continual observation may be easier in an average-case sense than in a worst-case sense. We show that, under pure differential privacy, the expected worst-case error for a random input must be Ω(log(T)/ε), matching the known lower bound for worst-case inputs.
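
A back-of-the-envelope comparison of the two basic one-dimensional mechanisms, using the classical Gaussian-mechanism calibration σ = Δ₂·sqrt(2 ln(1.25/δ))/ε (valid for ε < 1) rather than the paper's tighter analysis, already shows why small δ favours Laplace noise:

import math

def gaussian_std(delta2, eps, delta):
    # Classical Gaussian mechanism: sigma = Delta_2 * sqrt(2 ln(1.25/delta)) / eps.
    return delta2 * math.sqrt(2 * math.log(1.25 / delta)) / eps

def laplace_std(delta1, eps):
    # Laplace mechanism: scale b = Delta_1 / eps, standard deviation sqrt(2) * b.
    return math.sqrt(2) * delta1 / eps

eps = 0.5
for delta in (1e-3, 1e-6, 1e-12):
    print(delta, gaussian_std(1.0, eps, delta), laplace_std(1.0, eps))
# The Gaussian standard deviation grows like sqrt(log(1/delta)) as delta shrinks,
# while the Laplace standard deviation is independent of delta (for a single
# query with equal l1 and l2 sensitivity).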

Cite as

Joel Daniel Andersson, Rasmus Pagh, Teresa Anna Steiner, and Sahel Torkamani. Count on Your Elders: Laplace vs Gaussian Noise. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{andersson_et_al:LIPIcs.FORC.2025.10,
  author =	{Andersson, Joel Daniel and Pagh, Rasmus and Steiner, Teresa Anna and Torkamani, Sahel},
  title =	{{Count on Your Elders: Laplace vs Gaussian Noise}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.10},
  URN =		{urn:nbn:de:0030-drops-231376},
  doi =		{10.4230/LIPIcs.FORC.2025.10},
  annote =	{Keywords: differential privacy, continual observation, streaming, prefix sums, trees}
}
Document
Differentially Private High-Dimensional Approximate Range Counting, Revisited

Authors: Martin Aumüller, Fabrizio Boninsegna, and Francesco Silvestri

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Locality Sensitive Filters are known for offering a quasi-linear space data structure with rigorous guarantees for the Approximate Near Neighbor search (ANN) problem. Building on Locality Sensitive Filters, we derive a simple data structure for the Approximate Near Neighbor Counting (ANNC) problem under differential privacy (DP). Moreover, we provide a simple analysis leveraging a connection with concomitant statistics and extreme value theory. Our approach produces a simple data structure with a tunable parameter that regulates a trade-off between space-time and utility. Through this trade-off, our data structure matches the recent results of Andoni et al. (NeurIPS 2023) and can offer better utility at the cost of higher space and query time. In addition, we provide a more efficient algorithm under pure ε-DP and elucidate the connection between ANN and differentially private ANNC. As a side result, the paper provides a more compact description and analysis of Locality Sensitive Filters for Fair Near Neighbor Search, improving on a previous result of Aumüller et al. (TODS 2022).
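
The following toy sketch conveys only the general "hash, then release noisy bucket counts" flavour of differentially private near-neighbor counting; it uses plain random-hyperplane (SimHash) buckets with per-bucket Laplace noise and is not the Locality Sensitive Filter construction analyzed in the paper:

import itertools
import numpy as np

rng = np.random.default_rng(1)

def simhash_signature(x, hyperplanes):
    # One sign bit per random hyperplane.
    return tuple(int(b) for b in (hyperplanes @ x > 0))

def build_noisy_index(points, num_bits, eps):
    hyperplanes = rng.standard_normal((num_bits, points.shape[1]))
    counts = {sig: 0 for sig in itertools.product((0, 1), repeat=num_bits)}
    for x in points:
        counts[simhash_signature(x, hyperplanes)] += 1
    # Adding or removing one point changes a single bucket by 1, so releasing
    # every bucket count with Laplace(1/eps) noise is eps-DP.
    noisy = {sig: c + rng.laplace(scale=1.0 / eps) for sig, c in counts.items()}
    return hyperplanes, noisy

def approx_near_count(query, hyperplanes, noisy_counts):
    # Very coarse answer: the noisy count of the query's own bucket.
    return noisy_counts[simhash_signature(query, hyperplanes)]

points = rng.standard_normal((1000, 16))
hp, noisy = build_noisy_index(points, num_bits=8, eps=1.0)
print(approx_near_count(points[0], hp, noisy))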

Cite as

Martin Aumüller, Fabrizio Boninsegna, and Francesco Silvestri. Differentially Private High-Dimensional Approximate Range Counting, Revisited. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 15:1-15:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{aumuller_et_al:LIPIcs.FORC.2025.15,
  author =	{Aum\"{u}ller, Martin and Boninsegna, Fabrizio and Silvestri, Francesco},
  title =	{{Differentially Private High-Dimensional Approximate Range Counting, Revisited}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{15:1--15:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.15},
  URN =		{urn:nbn:de:0030-drops-231426},
  doi =		{10.4230/LIPIcs.FORC.2025.15},
  annote =	{Keywords: Differential Privacy, Locality Sensitive Filters, Approximate Range Counting, Concomitant Statistics}
}
Document
Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion

Authors: Samuel McCauley

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Approximate nearest neighbor search (ANN) data structures have widespread applications in machine learning, computational biology, and text processing. The goal of ANN is to preprocess a set S so that, given a query q, we can find a point y whose distance from q approximates the smallest distance from q to any point in S. For most distance functions, the best-known ANN bounds for high-dimensional point sets are obtained using techniques based on locality-sensitive hashing (LSH). Unfortunately, space efficiency is a major challenge for LSH-based data structures. Classic LSH techniques require a very large amount of space, often polynomial in |S|. A long line of work has developed intricate techniques to reduce this space usage, but these techniques suffer from downsides: they must be hand-tailored to each specific LSH, are often complicated, and their space reduction comes at the cost of significantly increased query times. In this paper we explore a new way to improve the space efficiency of LSH using function inversion techniques, originally developed by Fiat and Naor (2000). We begin by describing how function inversion can be used to improve LSH data structures. This gives a fairly simple, black-box method to reduce LSH space usage. Then, we give a data structure that leverages function inversion to improve the query time of the best known near-linear space data structure for approximate nearest neighbor search under Euclidean distance: the ALRW data structure of Andoni, Laarhoven, Razenshteyn, and Waingarten (2017). ALRW was previously shown to be optimal among "list-of-points" data structures for both Euclidean and Manhattan ANN; thus, in addition to giving improved bounds, our results imply that list-of-points data structures are not optimal for Euclidean or Manhattan ANN.

Cite as

Samuel McCauley. Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 88:1-88:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{mccauley:LIPIcs.ESA.2024.88,
  author =	{McCauley, Samuel},
  title =	{{Improved Space-Efficient Approximate Nearest Neighbor Search Using Function Inversion}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{88:1--88:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.88},
  URN =		{urn:nbn:de:0030-drops-211590},
  doi =		{10.4230/LIPIcs.ESA.2024.88},
  annote =	{Keywords: similarity search, locality-sensitive hashing, randomized algorithms, data structures, space efficiency, function inversion}
}
Document
RANDOM
Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private

Authors: Jakub Tětek

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
The exponential increase in the amount of available data makes taking advantage of them without violating users' privacy one of the fundamental problems of computer science. This question has been investigated thoroughly under the framework of differential privacy. However, most of the literature has not focused on settings where the amount of data is so large that we are not even able to compute the exact answer in the non-private setting (such as the streaming setting, the sublinear-time setting, etc.). This can often make the use of differential privacy infeasible in practice. In this paper, we show a general approach for making Monte Carlo randomized approximation algorithms differentially private. We only need to assume that the error R of the approximation algorithm is sufficiently concentrated around 0 (e.g. 𝔼[|R|] is bounded) and that the function being approximated has a small global sensitivity Δ. Specifically, if we have a randomized approximation algorithm with sufficiently concentrated error which has time/space/query complexity T(n,ρ), with ρ being an accuracy parameter, we can, generally speaking, get an algorithm with the same accuracy and complexity T(n,Θ(ε ρ)) that is ε-differentially private. Our technical results are as follows. First, we show that if the error is subexponential, then the Laplace mechanism with error magnitude proportional to the sum of the global sensitivity Δ and the subexponential diameter of the error of the algorithm makes the algorithm differentially private. This is true even if the worst-case global sensitivity of the algorithm is large or infinite. We then introduce a new additive noise mechanism, which we call the zero-symmetric Pareto mechanism. We show that using this mechanism, we can make an algorithm differentially private even if we only assume a bound on the first absolute moment of the error, 𝔼[|R|]. Finally, we use our results to give either the first known or improved sublinear-complexity differentially private algorithms for various problems, including frequency moments, estimating the average degree of a graph in sublinear time, rank queries, and estimating the size of the maximum matching. Our results raise many new questions, and we state multiple open problems.
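
The first result can be pictured roughly as follows; this is only a sketch, and the combined noise scale used here is illustrative rather than the paper's exact calibration in terms of the subexponential diameter of the error:

import numpy as np

rng = np.random.default_rng(2)

def privatize_approximation(approx_output, global_sensitivity, error_scale, eps):
    # Add Laplace noise to the output of a randomized approximation algorithm.
    # The noise magnitude grows with both the global sensitivity Delta of the
    # exact function and a bound on the scale of the algorithm's own
    # (subexponentially concentrated) error.
    scale = (global_sensitivity + error_scale) / eps
    return approx_output + rng.laplace(scale=scale)

# Toy usage: a sampling-based estimate of a count over n users, where each user
# changes the exact count by at most 1 (Delta = 1) and the sampling error is on
# the order of error_scale.
print(privatize_approximation(12345.0, global_sensitivity=1.0, error_scale=5.0, eps=1.0))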

Cite as

Jakub Tětek. Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 73:1-73:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{tetek:LIPIcs.APPROX/RANDOM.2024.73,
  author =	{T\v{e}tek, Jakub},
  title =	{{Additive Noise Mechanisms for Making Randomized Approximation Algorithms Differentially Private}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{73:1--73:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.73},
  URN =		{urn:nbn:de:0030-drops-210660},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.73},
  annote =	{Keywords: Differential privacy, Randomized approximation algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Optimal Non-Adaptive Cell Probe Dictionaries and Hashing

Authors: Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We present a simple and provably optimal non-adaptive cell probe data structure for the static dictionary problem. Our data structure supports storing a set of n key-value pairs from [u] × [u] using s words of space and answering key lookup queries in t = O(lg(u/n)/lg(s/n)) non-adaptive probes. This generalizes a solution to the membership problem (i.e., where no values are associated with keys) due to Buhrman et al. We also present matching lower bounds for the non-adaptive static membership problem in the deterministic setting. Our lower bound implies that both our dictionary algorithm and the preceding membership algorithm are optimal, and in particular that there is an inherent complexity gap in these problems between no adaptivity and one round of adaptivity (with which hashing-based algorithms solve these problems in constant time). Using the ideas underlying our data structure, we also obtain the first implementation of an n-wise independent family of hash functions with optimal evaluation time in the cell probe model.

Cite as

Kasper Green Larsen, Rasmus Pagh, Giuseppe Persiano, Toniann Pitassi, Kevin Yeo, and Or Zamir. Optimal Non-Adaptive Cell Probe Dictionaries and Hashing. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 104:1-104:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{larsen_et_al:LIPIcs.ICALP.2024.104,
  author =	{Larsen, Kasper Green and Pagh, Rasmus and Persiano, Giuseppe and Pitassi, Toniann and Yeo, Kevin and Zamir, Or},
  title =	{{Optimal Non-Adaptive Cell Probe Dictionaries and Hashing}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{104:1--104:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.104},
  URN =		{urn:nbn:de:0030-drops-202471},
  doi =		{10.4230/LIPIcs.ICALP.2024.104},
  annote =	{Keywords: non-adaptive, cell probe, dictionary, hashing}
}
Document
Daisy Bloom Filters

Authors: Ioana O. Bercea, Jakob Bæk Tejs Houen, and Rasmus Pagh

Published in: LIPIcs, Volume 294, 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)


Abstract
A filter is a widely used data structure for storing an approximation of a given set S of elements from some universe 𝒰 (a countable set). It represents a superset S' ⊇ S that is "close to S" in the sense that for x ∉ S, the probability that x ∈ S' is bounded by some ε > 0. The advantage of using a Bloom filter, when some false positives are acceptable, is that the space usage becomes smaller than what is required to store S exactly. Though filters are well-understood from a worst-case perspective, it is clear that state-of-the-art constructions may not be close to optimal for particular distributions of data and queries. Suppose, for instance, that some elements are in S with probability close to 1. Then it would make sense to always include them in S', saving space by not having to represent these elements in the filter. Questions like this have been raised in the context of Weighted Bloom filters (Bruck, Gao and Jiang, ISIT 2006) and Bloom filter implementations that make use of access to learned components (Vaidya, Knorr, Mitzenmacher, and Kraska, ICLR 2021). In this paper, we present a lower bound for the expected space that such a filter requires. We also show that the lower bound is asymptotically tight by exhibiting a filter construction that executes queries and insertions in worst-case constant time, and has a false positive rate at most ε with high probability over input sets drawn from a product distribution. We also present a Bloom filter alternative, which we call the Daisy Bloom filter, that executes operations faster and uses significantly less space than the standard Bloom filter.
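
For reference, a plain Bloom filter, the baseline the Daisy Bloom filter improves on; this sketch ignores any distributional information about S or the queries:

import hashlib

class BloomFilter:
    def __init__(self, num_bits, num_hashes):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for simplicity

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        # Never a false negative; false positive rate roughly (1 - e^{-kn/m})^k.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter(num_bits=10_000, num_hashes=7)
bf.insert("alice")
print(bf.query("alice"), bf.query("bob"))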

Cite as

Ioana O. Bercea, Jakob Bæk Tejs Houen, and Rasmus Pagh. Daisy Bloom Filters. In 19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 294, pp. 9:1-9:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bercea_et_al:LIPIcs.SWAT.2024.9,
  author =	{Bercea, Ioana O. and Houen, Jakob B{\ae}k Tejs and Pagh, Rasmus},
  title =	{{Daisy Bloom Filters}},
  booktitle =	{19th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2024)},
  pages =	{9:1--9:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-318-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{294},
  editor =	{Bodlaender, Hans L.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2024.9},
  URN =		{urn:nbn:de:0030-drops-200491},
  doi =		{10.4230/LIPIcs.SWAT.2024.9},
  annote =	{Keywords: Bloom filters, input distribution, learned data structures}
}
Document
CG Challenge
Constructing Concise Convex Covers via Clique Covers (CG Challenge)

Authors: Mikkel Abrahamsen, William Bille Meyling, and André Nusser

Published in: LIPIcs, Volume 258, 39th International Symposium on Computational Geometry (SoCG 2023)


Abstract
This work describes the winning implementation of the CG:SHOP 2023 Challenge. The topic of the Challenge was the convex cover problem: given a polygon P (with holes), find a minimum-cardinality set of convex polygons whose union equals P. We use a three-step approach: (1) Create a suitable partition of P. (2) Compute a visibility graph of the pieces of the partition. (3) Solve a vertex clique cover problem on the visibility graph, from which we then derive the convex cover. This way we capture the geometric difficulty in the first step and the combinatorial difficulty in the third step.
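
Step (3) can be illustrated with a simple greedy heuristic for vertex clique cover (the winning implementation uses a much more careful solver; the visibility graph below is an arbitrary toy example, and in the Challenge pipeline the convex cover is then derived from such a clique cover):

def greedy_clique_cover(adj):
    # Greedy vertex clique cover: repeatedly grow a clique from an uncovered
    # vertex. adj maps each vertex to the set of its neighbours (symmetric).
    # Returns a list of cliques whose union is the vertex set (a heuristic,
    # not necessarily minimum).
    uncovered = set(adj)
    cliques = []
    while uncovered:
        v = next(iter(uncovered))
        clique = {v}
        for u in uncovered - {v}:
            if all(u in adj[w] for w in clique):
                clique.add(u)
        uncovered -= clique
        cliques.append(clique)
    return cliques

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_clique_cover(adj))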

Cite as

Mikkel Abrahamsen, William Bille Meyling, and André Nusser. Constructing Concise Convex Covers via Clique Covers (CG Challenge). In 39th International Symposium on Computational Geometry (SoCG 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 258, pp. 66:1-66:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{abrahamsen_et_al:LIPIcs.SoCG.2023.66,
  author =	{Abrahamsen, Mikkel and Bille Meyling, William and Nusser, Andr\'{e}},
  title =	{{Constructing Concise Convex Covers via Clique Covers}},
  booktitle =	{39th International Symposium on Computational Geometry (SoCG 2023)},
  pages =	{66:1--66:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-273-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{258},
  editor =	{Chambers, Erin W. and Gudmundsson, Joachim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2023.66},
  URN =		{urn:nbn:de:0030-drops-179164},
  doi =		{10.4230/LIPIcs.SoCG.2023.66},
  annote =	{Keywords: Convex cover, Polygons with holes, Algorithm engineering, Vertex clique cover}
}
Document
Track A: Algorithms, Complexity and Games
Approximate Triangle Counting via Sampling and Fast Matrix Multiplication

Authors: Jakub Tětek

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
There is a simple O(n³/(ε²T))-time algorithm for (1±ε)-approximate triangle counting, where T is the number of triangles in the graph and n the number of vertices. At the same time, one may count triangles exactly using fast matrix multiplication in time Õ(n^ω). Is it possible to get a negative dependency on the number of triangles T while retaining the state-of-the-art n^ω dependency on n? We answer this question positively by providing an algorithm which runs in time O(n^ω/T^{ω-2}) ⋅ poly(n^{o(1)}/ε). This is optimal in the sense that, as long as the exponent of T is independent of n and T, it cannot be improved while retaining the dependency on n. Our algorithm improves upon the state of the art when T ≫ 1 and T ≪ n. We also consider the problem of approximate triangle counting in sparse graphs, parameterized by the number of edges m. The best known algorithm runs in time Õ_ε(m^{3/2}/T) [Eden et al., SIAM Journal on Computing, 2017], while an algorithm by Alon et al. [JACM, 1995] counts triangles exactly in time Õ(m^{2ω/(ω+1)}). We again get an algorithm whose complexity has a state-of-the-art dependency on m while having a negative dependency on T. Specifically, our algorithm runs in time O(m^{2ω/(ω+1)}/T^{2(ω-1)/(ω+1)}) ⋅ poly(n^{o(1)}/ε). This is again optimal in the sense that no better constant exponent of T is possible without worsening the dependency on m. This algorithm improves upon the state of the art when T ≫ 1 and T ≪ √m. In both cases, algorithms with time complexity matching query complexity lower bounds were known for some range of parameters. While those algorithms have optimal query complexity for the whole range of T, their time complexity departs from the query complexity and is no longer optimal (as we show) for T ≪ n and T ≪ √m, respectively. We focus on the time complexity in this range of T. To the best of our knowledge, this is the first paper considering the discrepancy between query and time complexity in graph parameter estimation.
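
The simple sampling baseline mentioned in the first sentence (uniform vertex triples; this is not the paper's matrix-multiplication-based algorithm) looks like this:

import itertools
import math
import random

def sampled_triangle_count(n, edges, num_samples, rng=random.Random(0)):
    # Estimate the number of triangles by sampling vertex triples uniformly.
    # With Theta(n^3 / (eps^2 T)) samples this gives a (1 +/- eps)-approximation
    # with constant probability. edges is a set of frozensets {u, v}.
    hits = 0
    for _ in range(num_samples):
        u, v, w = rng.sample(range(n), 3)
        if (frozenset((u, v)) in edges and frozenset((v, w)) in edges
                and frozenset((u, w)) in edges):
            hits += 1
    return hits / num_samples * math.comb(n, 3)

# Toy check: the complete graph K_4 has exactly 4 triangles.
n = 4
edges = {frozenset(e) for e in itertools.combinations(range(n), 2)}
print(sampled_triangle_count(n, edges, num_samples=20000))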

Cite as

Jakub Tětek. Approximate Triangle Counting via Sampling and Fast Matrix Multiplication. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 107:1-107:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{tetek:LIPIcs.ICALP.2022.107,
  author =	{T\v{e}tek, Jakub},
  title =	{{Approximate Triangle Counting via Sampling and Fast Matrix Multiplication}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{107:1--107:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.107},
  URN =		{urn:nbn:de:0030-drops-164485},
  doi =		{10.4230/LIPIcs.ICALP.2022.107},
  annote =	{Keywords: Approximate triangle counting, Fast matrix multiplication, Sampling}
}
Document
Complete Volume
LIPIcs, Volume 204, ESA 2021, Complete Volume

Authors: Petra Mutzel, Rasmus Pagh, and Grzegorz Herman

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
LIPIcs, Volume 204, ESA 2021, Complete Volume

Cite as

29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 1-1340, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Proceedings{mutzel_et_al:LIPIcs.ESA.2021,
  title =	{{LIPIcs, Volume 204, ESA 2021, Complete Volume}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{1--1340},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021},
  URN =		{urn:nbn:de:0030-drops-145808},
  doi =		{10.4230/LIPIcs.ESA.2021},
  annote =	{Keywords: LIPIcs, Volume 204, ESA 2021, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Petra Mutzel, Rasmus Pagh, and Grzegorz Herman

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 0:i-0:xx, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{mutzel_et_al:LIPIcs.ESA.2021.0,
  author =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{0:i--0:xx},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.0},
  URN =		{urn:nbn:de:0030-drops-145816},
  doi =		{10.4230/LIPIcs.ESA.2021.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Invited Talk
Network Planning and Routing Problems over Time: Models, Complexity and Algorithms (Invited Talk)

Authors: Lukas Glomb, Benno Hoch, Frauke Liers, and Florian Rösel

Published in: LIPIcs, Volume 204, 29th Annual European Symposium on Algorithms (ESA 2021)


Abstract
In this invited contribution for ESA 2021, we study the complexity of and algorithms for network optimization tasks with a timing component. They occur, for example, in planning or routing problems that need to be solved repeatedly over time. Typically, already simplified versions of such problems are NP-hard. In addition, the instances are typically too large to be solved straightforwardly on a time-expanded graph. After an introduction to the area, we state the problem of determining best possible non-stop trajectories in a network that are not allowed to cross at any point in time. For simplified settings, polynomial-time solution approaches are presented, whereas the problems are shown to be NP-hard already for restricted settings. When moving to more complex and more realistic settings, as they occur, for example, in determining non-stop disjoint trajectories for a set of aircraft, we present heuristic algorithms that adaptively refine coarse disjoint trajectories in the timing dimension. In order to solve the non-stop disjoint trajectories problem over time, the method is integrated into a rolling-horizon algorithm. We present computational results for realistic settings. Motivated by the fact that rolling-horizon approaches are often applied in practice without knowledge of the quality of the obtained solutions, we then analyze the solution quality of general rolling-horizon algorithms for optimization tasks with a timing component from a more abstract point of view, and apply the analysis to different planning problems. We end by pointing out some challenges and possibilities for future research.
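
The rolling-horizon pattern discussed in the talk, reduced to a generic skeleton (the window and step lengths and the subproblem solver solve_window are placeholders; real instances would call an exact or heuristic trajectory solver here):

def rolling_horizon(total_horizon, window, step, solve_window, initial_state):
    # Repeatedly solve the problem restricted to the time window [t, t + window),
    # commit only the first `step` time units of that solution, then advance.
    state = initial_state
    committed = []
    t = 0
    while t < total_horizon:
        plan = solve_window(state, t, min(t + window, total_horizon))
        committed_part = plan[:step]
        committed.extend(committed_part)
        if committed_part:
            state = committed_part[-1]
        t += step
    return committed

# Toy solver: the "plan" is simply the list of time steps in the window.
print(rolling_horizon(10, window=4, step=2,
                      solve_window=lambda state, a, b: list(range(a, b)),
                      initial_state=0))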

Cite as

Lukas Glomb, Benno Hoch, Frauke Liers, and Florian Rösel. Network Planning and Routing Problems over Time: Models, Complexity and Algorithms (Invited Talk). In 29th Annual European Symposium on Algorithms (ESA 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 204, pp. 1:1-1:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{glomb_et_al:LIPIcs.ESA.2021.1,
  author =	{Glomb, Lukas and Hoch, Benno and Liers, Frauke and R\"{o}sel, Florian},
  title =	{{Network Planning and Routing Problems over Time: Models, Complexity and Algorithms}},
  booktitle =	{29th Annual European Symposium on Algorithms (ESA 2021)},
  pages =	{1:1--1:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-204-4},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{204},
  editor =	{Mutzel, Petra and Pagh, Rasmus and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.1},
  URN =		{urn:nbn:de:0030-drops-145822},
  doi =		{10.4230/LIPIcs.ESA.2021.1},
  annote =	{Keywords: network problems over time, rolling-horizon, complexity, approximation}
}
  • Refine by Author
  • 16 Pagh, Rasmus
  • 4 Friedrich, Tobias
  • 4 Tětek, Jakub
  • 3 Aumüller, Martin
  • 3 Bläsius, Thomas

  • Refine by Classification

  • Refine by Keyword
  • 6 approximation
  • 4 differential privacy
  • 4 fixed-parameter tractability
  • 4 locality-sensitive hashing
  • 3 approximation algorithm

  • Refine by Type
  • 140 document
  • 2 volume

  • Refine by Publication Year
  • 87 2021
  • 36 2016
  • 4 2024
  • 4 2025
  • 3 2017
