58 Search Results for "Kannan, Ravi"


Volume

LIPIcs, Volume 4

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science

FSTTCS 2009, December 15-17, 2009, Kanpur, India

Editors: Ravi Kannan and K. Narayan Kumar

Document
Worst-Case to Expander-Case Reductions: Derandomized and Generalized

Authors: Amir Abboud and Nathan Wallheimer

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A recent paper by Abboud and Wallheimer [ITCS 2023] presents self-reductions for various fundamental graph problems, which transform worst-case instances to expanders, thus proving that the complexity remains unchanged if the input is assumed to be an expander. An interesting corollary of their self-reductions is that if some problem admits such a reduction, then the popular algorithmic paradigm based on expander-decompositions is useless against it. In this paper, we improve their core gadget, which augments a graph to make it an expander while retaining its important structure. Our new core construction has the benefit of being simple to analyze and generalize while obtaining the following results: - A derandomization of the self-reductions, showing that the equivalence between worst-case and expander-case holds even for deterministic algorithms, and ruling out the use of expander-decompositions as a derandomization tool. - An extension of the results to other models of computation, such as the Fully Dynamic model and the Congested Clique model. In the former, we either improve or provide an alternative approach to some recent hardness results for dynamic expander graphs by Henzinger, Paz, and Sricharan [ESA 2022]. In addition, we continue this line of research by designing new self-reductions for more problems, such as Max-Cut and dynamic Densest Subgraph, and demonstrating that the core gadget can be utilized to lift lower bounds based on the OMv Conjecture to expanders.
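
The core gadget itself is not spelled out in the abstract. As a loose illustration of the general idea of augmenting a graph to boost its expansion, the sketch below (our own toy construction, not the paper's gadget, which must additionally preserve the answer to the underlying problem) overlays a few random perfect matchings on the vertex set, a standard way to obtain an expander with high probability.

import random

def overlay_random_matchings(n, edges, rounds=3, seed=0):
    # Augment the edge set with `rounds` random perfect matchings on
    # vertices 0..n-1 (n assumed even). Overlaying a few random matchings
    # yields an expander with high probability; this is NOT the paper's
    # gadget, which also has to preserve the problem's answer.
    rng = random.Random(seed)
    augmented = {tuple(sorted(e)) for e in edges}
    for _ in range(rounds):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(0, n, 2):
            augmented.add(tuple(sorted((perm[i], perm[i + 1]))))
    return augmented

# Example: a 10-vertex path, which by itself is far from being an expander.
path_edges = [(i, i + 1) for i in range(9)]
print(len(overlay_random_matchings(10, path_edges)))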

Cite as

Amir Abboud and Nathan Wallheimer. Worst-Case to Expander-Case Reductions: Derandomized and Generalized. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.4,
  author =	{Abboud, Amir and Wallheimer, Nathan},
  title =	{{Worst-Case to Expander-Case Reductions: Derandomized and Generalized}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.4},
  URN =		{urn:nbn:de:0030-drops-210751},
  doi =		{10.4230/LIPIcs.ESA.2024.4},
  annote =	{Keywords: Fine-grained complexity, expander graphs, self-reductions, worst-case to expander-case, expander decomposition, dynamic algorithms, exact and parameterized complexity, max-cut, maximum matching, k-clique detection, densest subgraph}
}
Document
Practical Expander Decomposition

Authors: Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
The expander decomposition of a graph decomposes the set of vertices into clusters such that the induced subgraph of each cluster has high conductance, and there is only a small number of inter-cluster edges. Expander decompositions are at the forefront of recent theoretical developments in the area of efficient graph algorithms and act as a central component in several state-of-the-art graph algorithms for fundamental problems like maximum flow, min-cost flow, Gomory-Hu trees, global min-cut, and more. Despite this crucial role and the existence of theoretically efficient expander decomposition algorithms, little is known about their behavior in practice. In this paper we explore the engineering design space in implementations for computing expander decompositions. We base our implementation on the near-linear time algorithm of Saranurak and Wang [SODA'19], and enhance it with practical optimizations that accelerate its running time in practice and at the same time preserve the theoretical runtime and approximation guarantees. We evaluate our algorithm on real-world graphs with up to tens of millions of edges. We demonstrate significant speedups of up to two orders of magnitude over the only prior implementation. To the best of our knowledge, our implementation is the first to compute expander decompositions at this scale within reasonable time.
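
For intuition about the object being engineered here, a brute-force verifier of the definition (our own toy code, unrelated to the Saranurak-Wang algorithm) fits in a few lines: every cluster must induce a subgraph of conductance at least ϕ, and the inter-cluster edges may account for at most an ε fraction of the total volume.

from itertools import combinations

def conductance_ok(cluster, edges, phi):
    # Brute-force check that the subgraph induced on `cluster` has
    # conductance at least phi. Exponential in |cluster|; toy sizes only.
    nodes = list(cluster)
    adj = {v: set() for v in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    deg = {v: len(adj[v]) for v in nodes}
    total_vol = sum(deg.values())
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            s = set(side)
            cut = sum(1 for u in s for v in adj[u] if v not in s)
            vol_s = sum(deg[v] for v in s)
            denom = min(vol_s, total_vol - vol_s)
            if denom > 0 and cut < phi * denom:
                return False
    return True

def is_expander_decomposition(clusters, edges, eps, phi):
    # Inter-cluster edges must be at most an eps fraction of the total
    # volume 2|E|, and every cluster must induce a phi-expander.
    label = {v: i for i, c in enumerate(clusters) for v in c}
    inter = sum(1 for u, v in edges if label[u] != label[v])
    return inter <= eps * 2 * len(edges) and all(
        conductance_ok(c, edges, phi) for c in clusters)

# Example: two triangles joined by a single edge, split into two clusters.
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(is_expander_decomposition([{0, 1, 2}, {3, 4, 5}], E, eps=0.2, phi=0.3))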

Cite as

Lars Gottesbüren, Nikos Parotsidis, and Maximilian Probst Gutenberg. Practical Expander Decomposition. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 61:1-61:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gottesburen_et_al:LIPIcs.ESA.2024.61,
  author =	{Gottesb\"{u}ren, Lars and Parotsidis, Nikos and Gutenberg, Maximilian Probst},
  title =	{{Practical Expander Decomposition}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{61:1--61:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.61},
  URN =		{urn:nbn:de:0030-drops-211323},
  doi =		{10.4230/LIPIcs.ESA.2024.61},
  annote =	{Keywords: Expander Decomposition, Clustering, Graph Algorithms}
}
Document
RANDOM
Nearly Optimal Bounds for Sample-Based Testing and Learning of k-Monotone Functions

Authors: Hadley Black

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study monotonicity testing of functions f : {0,1}^d → {0,1} using sample-based algorithms, which are only allowed to observe the value of f on points drawn independently from the uniform distribution. A classic result by Bshouty-Tamon (J. ACM 1996) proved that monotone functions can be learned with exp(Õ(min{(1/ε)√d,d})) samples and it is not hard to show that this bound extends to testing. Prior to our work the only lower bound for this problem was Ω(√{exp(d)/ε}) in the small ε parameter regime, when ε = O(d^{-3/2}), due to Goldreich-Goldwasser-Lehman-Ron-Samorodnitsky (Combinatorica 2000). Thus, the sample complexity of monotonicity testing was wide open for ε ≫ d^{-3/2}. We resolve this question, obtaining a nearly tight lower bound of exp(Ω(min{(1/ε)√d,d})) for all ε at most a sufficiently small constant. In fact, we prove a much more general result, showing that the sample complexity of k-monotonicity testing and learning for functions f : {0,1}^d → [r] is exp(Ω(min{(rk/ε)√d,d})). For testing with one-sided error we show that the sample complexity is exp(Ω(d)). Beyond the hypercube, we prove nearly tight bounds (up to polylog factors of d,k,r,1/ε in the exponent) of exp(Θ̃(min{(rk/ε)√d,d})) on the sample complexity of testing and learning measurable k-monotone functions f : ℝ^d → [r] under product distributions. Our upper bound improves upon the previous bound of exp(Õ(min{(k/ε²)√d,d})) by Harms-Yoshida (ICALP 2022) for Boolean functions (r = 2).
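
For context on the model, the natural one-sided sample-based tester simply looks for a violated comparable pair among its samples; the lower bounds above say that no sample-based strategy, including this one, can get away with fewer than exponentially many samples. A minimal sketch (our own illustration, not an algorithm from the paper), with the sample budget as a free parameter:

import random

def sample_based_monotonicity_test(f, d, num_samples, seed=0):
    # One-sided sample-based tester for f: {0,1}^d -> {0,1}. Draw points
    # uniformly at random and reject iff two sampled points x <= y
    # (coordinate-wise) have f(x) > f(y). A monotone f is always accepted.
    rng = random.Random(seed)
    samples = []
    for _ in range(num_samples):
        x = tuple(rng.randint(0, 1) for _ in range(d))
        samples.append((x, f(x)))
    for (x, fx) in samples:
        for (y, fy) in samples:
            if all(a <= b for a, b in zip(x, y)) and fx > fy:
                return "reject"
    return "accept"

# Example: the (highly non-monotone) parity function on 4 bits.
parity = lambda x: sum(x) % 2
print(sample_based_monotonicity_test(parity, d=4, num_samples=50))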

Cite as

Hadley Black. Nearly Optimal Bounds for Sample-Based Testing and Learning of k-Monotone Functions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 37:1-37:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{black:LIPIcs.APPROX/RANDOM.2024.37,
  author =	{Black, Hadley},
  title =	{{Nearly Optimal Bounds for Sample-Based Testing and Learning of k-Monotone Functions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{37:1--37:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.37},
  URN =		{urn:nbn:de:0030-drops-210308},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.37},
  annote =	{Keywords: Property testing, learning, Boolean functions, monotonicity, k-monotonicity}
}
Document
Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy

Authors: Sepehr Assadi, Prantar Ghosh, Bruno Loff, Parth Mittal, and Sagnik Mukhopadhyay

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
The following question arises naturally in the study of graph streaming algorithms: Is there any graph problem which is "not too hard", in that it can be solved efficiently with total communication (nearly) linear in the number n of vertices, and for which, nonetheless, any streaming algorithm with Õ(n) space (i.e., a semi-streaming algorithm) needs a polynomial n^Ω(1) number of passes? Assadi, Chen, and Khanna [STOC 2019] were the first to prove that this is indeed the case. However, the lower bounds that they obtained are for rather non-standard graph problems. Our first main contribution is to present the first polynomial-pass lower bounds for natural "not too hard" graph problems studied previously in the streaming model: k-cores and degeneracy. We devise a novel communication protocol for both problems with near-linear communication, thus showing that k-cores and degeneracy are natural examples of "not too hard" problems. Indeed, previous work has developed single-pass semi-streaming algorithms for approximating these problems. In contrast, we prove that any semi-streaming algorithm for exactly solving these problems requires (almost) Ω(n^{1/3}) passes. The lower bound follows by a reduction from a generalization of the hidden pointer chasing (HPC) problem of Assadi, Chen, and Khanna, which is also the basis of their earlier semi-streaming lower bounds. Our second main contribution is improved round-communication lower bounds for the underlying communication problems at the basis of these reductions: - We improve the previous lower bound of Assadi, Chen, and Khanna for HPC to achieve optimal bounds for this problem. - We further observe that all current reductions from HPC can also work with a generalized version of this problem that we call MultiHPC, and prove an even stronger and optimal lower bound for this generalization. These two results collectively allow us to improve the resulting pass lower bounds for semi-streaming algorithms by a polynomial factor, namely, from n^{1/5} to n^{1/3} passes.
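
As background on the two problems: offline, both are solved by the classical peeling procedure, repeatedly deleting a minimum-degree vertex; the point of the lower bound above is that this inherently sequential process cannot be compressed into few semi-streaming passes when exact answers are required. A small offline sketch (our own illustration):

import heapq
from collections import defaultdict

def degeneracy_and_core_numbers(n, edges):
    # Offline peeling: repeatedly remove a minimum-degree vertex. The
    # largest degree seen at removal time is the degeneracy, and each
    # vertex's core number is that running maximum at its own removal.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, peel_max = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue  # stale heap entry
        peel_max = max(peel_max, d)
        core[v] = peel_max
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return peel_max, core

# Example: a triangle with a pendant vertex has degeneracy 2.
print(degeneracy_and_core_numbers(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))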

Cite as

Sepehr Assadi, Prantar Ghosh, Bruno Loff, Parth Mittal, and Sagnik Mukhopadhyay. Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 7:1-7:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{assadi_et_al:LIPIcs.CCC.2024.7,
  author =	{Assadi, Sepehr and Ghosh, Prantar and Loff, Bruno and Mittal, Parth and Mukhopadhyay, Sagnik},
  title =	{{Polynomial Pass Semi-Streaming Lower Bounds for K-Cores and Degeneracy}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{7:1--7:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.7},
  URN =		{urn:nbn:de:0030-drops-204035},
  doi =		{10.4230/LIPIcs.CCC.2024.7},
  annote =	{Keywords: Graph streaming, Lower bounds, Communication complexity, k-Cores and degeneracy}
}
Document
Complexity of Robust Orbit Problems for Torus Actions and the abc-Conjecture

Authors: Peter Bürgisser, Mahmut Levent Doğan, Visu Makam, Michael Walter, and Avi Wigderson

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
When a group acts on a set, it naturally partitions it into orbits, giving rise to orbit problems. These are natural algorithmic problems, as symmetries are central in numerous questions and structures in physics, mathematics, computer science, optimization, and more. Accordingly, it is of high interest to understand their computational complexity. Recently, Bürgisser et al. (2021) gave the first polynomial-time algorithms for orbit problems of torus actions, that is, actions of commutative continuous groups on Euclidean space. In this work, motivated by theoretical and practical applications, we study the computational complexity of robust generalizations of these orbit problems, which amount to approximating the distance of orbits in ℂⁿ up to a factor γ ≥ 1. In particular, this allows deciding whether two inputs are approximately in the same orbit or far from being so. On the one hand, we prove the NP-hardness of this problem for γ = n^Ω(1/log log n) by reducing the closest vector problem for lattices to it. On the other hand, we describe algorithms for solving this problem for an approximation factor γ = exp(poly(n)). Our algorithms combine tools from invariant theory and algorithmic lattice theory, and they also provide group elements witnessing the proximity of the given orbits (in contrast to the algebraic algorithms of prior work). We prove that they run in polynomial time if and only if a version of the famous number-theoretic abc-conjecture holds - establishing a new and surprising connection between computational complexity and number theory.

Cite as

Peter Bürgisser, Mahmut Levent Doğan, Visu Makam, Michael Walter, and Avi Wigderson. Complexity of Robust Orbit Problems for Torus Actions and the abc-Conjecture. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 14:1-14:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{burgisser_et_al:LIPIcs.CCC.2024.14,
  author =	{B\"{u}rgisser, Peter and Do\u{g}an, Mahmut Levent and Makam, Visu and Walter, Michael and Wigderson, Avi},
  title =	{{Complexity of Robust Orbit Problems for Torus Actions and the abc-Conjecture}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{14:1--14:48},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.14},
  URN =		{urn:nbn:de:0030-drops-204100},
  doi =		{10.4230/LIPIcs.CCC.2024.14},
  annote =	{Keywords: computational invariant theory, geometric complexity theory, orbit problems, abc-conjecture, closest vector problem}
}
Document
Local Enumeration and Majority Lower Bounds

Authors: Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Depth-3 circuit lower bounds and k-SAT algorithms are intimately related; the state-of-the-art Σ^k_3-circuit lower bound (Or-And-Or circuits with bottom fan-in at most k) and the k-SAT algorithm of Paturi, Pudlák, Saks, and Zane (J. ACM'05) are based on the same combinatorial theorem regarding k-CNFs. In this paper we define a problem which reveals new interactions between the two, and suggests a concrete approach to significantly stronger circuit lower bounds and improved k-SAT algorithms. For a natural number k and a parameter t, we consider the Enum(k, t) problem defined as follows: given an n-variable k-CNF and an initial assignment α, output all satisfying assignments at Hamming distance t(n) from α, assuming that there are no satisfying assignments at Hamming distance less than t(n) from α. We observe that an upper bound b(n, k, t) on the complexity of Enum(k, t) simultaneously implies depth-3 circuit lower bounds and k-SAT algorithms: - Depth-3 circuits: Any Σ^k_3 circuit computing the Majority function has size at least binom(n,n/2)/b(n, k, n/2). - k-SAT: There exists an algorithm solving k-SAT in time O(∑_{t=1}^{n/2}b(n, k, t)). A simple construction shows that b(n, k, n/2) ≥ 2^{(1 - O(log(k)/k))n}. Thus, matching upper bounds for b(n, k, n/2) would imply a Σ^k_3-circuit lower bound of 2^Ω(log(k)n/k) and a k-SAT upper bound of 2^{(1 - Ω(log(k)/k))n}. The former yields an unrestricted depth-3 lower bound of 2^ω(√n), solving a long-standing open problem, and the latter breaks the Super Strong Exponential Time Hypothesis. In this paper, we propose a randomized algorithm for Enum(k, t) and introduce new ideas to analyze it. We demonstrate the power of our ideas by considering the first non-trivial instance of the problem, i.e., Enum(3, n/2). We show that the expected running time of our algorithm is 1.598ⁿ, substantially improving on the trivial bound of 3^{n/2} ≃ 1.732ⁿ. This already improves Σ^3_3 lower bounds for the Majority function to 1.251ⁿ. The previous bound was 1.154ⁿ, which follows from the work of Håstad, Jukna, and Pudlák (Comput. Complex.'95). By restricting ourselves to monotone CNFs, Enum(k, t) immediately becomes a hypergraph Turán problem. Therefore our techniques might be of independent interest in extremal combinatorics.
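
To make the Enum(k, t) definition concrete, the most naive approach simply enumerates all binom(n, t) assignments at Hamming distance exactly t from α and checks each against the k-CNF; the trivial 3^{n/2} bound quoted above and the paper's 1.598ⁿ algorithm both do substantially better than this for t = n/2. A toy sketch with a hypothetical clause encoding of our choosing:

from itertools import combinations

def enum_at_distance(clauses, alpha, t):
    # Naive Enum(k, t): yield every satisfying assignment at Hamming
    # distance exactly t from alpha. Clauses are lists of signed literals
    # (+i / -i for variable i); alpha maps each variable to 0 or 1.
    variables = sorted(alpha)

    def satisfies(assignment):
        return all(
            any((lit > 0) == bool(assignment[abs(lit)]) for lit in clause)
            for clause in clauses)

    for flipped in combinations(variables, t):
        candidate = dict(alpha)
        for v in flipped:
            candidate[v] = 1 - candidate[v]
        if satisfies(candidate):
            yield candidate

# Example: the 3-CNF (x1 or x2 or x3), alpha = all zeros, t = 1.
print(list(enum_at_distance([[1, 2, 3]], {1: 0, 2: 0, 3: 0}, 1)))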

Cite as

Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard. Local Enumeration and Majority Lower Bounds. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gurumukhani_et_al:LIPIcs.CCC.2024.17,
  author =	{Gurumukhani, Mohit and Paturi, Ramamohan and Pudl\'{a}k, Pavel and Saks, Michael and Talebanfard, Navid},
  title =	{{Local Enumeration and Majority Lower Bounds}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.17},
  URN =		{urn:nbn:de:0030-drops-204136},
  doi =		{10.4230/LIPIcs.CCC.2024.17},
  annote =	{Keywords: Depth 3 circuits, k-CNF satisfiability, Circuit lower bounds, Majority function}
}
Document
Track A: Algorithms, Complexity and Games
On the Streaming Complexity of Expander Decomposition

Authors: Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
In this paper we study the problem of finding (ε, ϕ)-expander decompositions of a graph in the streaming model, in particular for dynamic streams of edge insertions and deletions. The goal is to partition the vertex set so that every component induces a ϕ-expander, while the number of inter-cluster edges is only an ε fraction of the total volume. It was recently shown that there exists a simple algorithm to construct a (O(ϕ log n), ϕ)-expander decomposition of an n-vertex graph using Õ(n/ϕ²) bits of space [Filtser, Kapralov, Makarov, ITCS'23]. This result calls for understanding the extent to which this dependence of the space on the sparsity parameter ϕ is inherent. We move towards answering this question on two fronts. We prove that a (O(ϕ log n), ϕ)-expander decomposition can be found using Õ(n) space, for every ϕ. At the core of our result is the first streaming algorithm for computing boundary-linked expander decompositions, a recently introduced strengthening of the classical notion [Goranci et al., SODA'21]. The key advantage is that a classical sparsifier [Fung et al., STOC'11], with size independent of ϕ, preserves the cuts inside the clusters of a boundary-linked expander decomposition within a multiplicative error. Notable algorithmic applications use sequences of expander decompositions, in particular, one often repeatedly computes a decomposition of the subgraph induced by the inter-cluster edges (e.g., the seminal work of Spielman and Teng on spectral sparsifiers [Spielman, Teng, SIAM Journal on Computing 40(4)], or the recent maximum flow breakthrough [Chen et al., FOCS'22], among others). We prove that any streaming algorithm that computes a sequence of (O(ϕ log n), ϕ)-expander decompositions requires Ω̃(n/ϕ) bits of space, even in insertion-only streams.

Cite as

Yu Chen, Michael Kapralov, Mikhail Makarov, and Davide Mazzali. On the Streaming Complexity of Expander Decomposition. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 46:1-46:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.ICALP.2024.46,
  author =	{Chen, Yu and Kapralov, Michael and Makarov, Mikhail and Mazzali, Davide},
  title =	{{On the Streaming Complexity of Expander Decomposition}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{46:1--46:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.46},
  URN =		{urn:nbn:de:0030-drops-201890},
  doi =		{10.4230/LIPIcs.ICALP.2024.46},
  annote =	{Keywords: Graph Sketching, Dynamic Streaming, Expander Decomposition}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Scalable MPC Algorithms for Clustering in High Dimension

Authors: Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We design new parallel algorithms for clustering in high-dimensional Euclidean spaces. These algorithms run in the Massively Parallel Computation (MPC) model, and are fully scalable, meaning that the local memory in each machine may be n^σ for arbitrarily small fixed σ > 0. Importantly, the local memory may be substantially smaller than the number of clusters k, yet all our algorithms are fast, i.e., run in O(1) rounds. We first devise a fast MPC algorithm for O(1)-approximation of uniform Facility Location. This is the first fully-scalable MPC algorithm that achieves O(1)-approximation for any clustering problem in general geometric setting; previous algorithms only provide poly(log n)-approximation or apply to restricted inputs, like low dimension or small number of clusters k; e.g. [Bhaskara and Wijewardena, ICML'18; Cohen-Addad et al., NeurIPS'21; Cohen-Addad et al., ICML'22]. We then build on this Facility Location result and devise a fast MPC algorithm that achieves O(1)-bicriteria approximation for k-Median and for k-Means, namely, it computes (1+ε)k clusters of cost within O(1/ε²)-factor of the optimum for k clusters. A primary technical tool that we introduce, and may be of independent interest, is a new MPC primitive for geometric aggregation, namely, computing for every data point a statistic of its approximate neighborhood, for statistics like range counting and nearest-neighbor search. Our implementation of this primitive works in high dimension, and is based on consistent hashing (aka sparse partition), a technique that was recently used for streaming algorithms [Czumaj et al., FOCS'22].
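
The geometric aggregation primitive is easiest to picture on one machine: assign every data point a statistic of its approximate neighborhood. The toy sketch below is our own drastic simplification; it is not consistent hashing / sparse partition and ignores the MPC distribution entirely, but it shows the flavor of computing a per-point neighborhood statistic via hashing points into cells.

from collections import Counter

def approximate_range_counts(points, cell_width):
    # Bucket each point into an axis-aligned grid cell of side `cell_width`
    # and report, for every point, the population of its own cell. Points in
    # the same cell are within L_inf distance `cell_width` of each other, so
    # the count is a crude proxy for a range count at that scale.
    def cell(p):
        return tuple(int(coord // cell_width) for coord in p)
    counts = Counter(cell(p) for p in points)
    return [counts[cell(p)] for p in points]

# Example in 3 dimensions: two nearby points and one far-away point.
pts = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.4), (5.0, 5.0, 5.0)]
print(approximate_range_counts(pts, cell_width=1.0))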

Cite as

Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully-Scalable MPC Algorithms for Clustering in High Dimension. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2024.50,
  author =	{Czumaj, Artur and Gao, Guichen and Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Vesel\'{y}, Pavel},
  title =	{{Fully-Scalable MPC Algorithms for Clustering in High Dimension}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.50},
  URN =		{urn:nbn:de:0030-drops-201938},
  doi =		{10.4230/LIPIcs.ICALP.2024.50},
  annote =	{Keywords: Massively parallel computing, high dimension, facility location, k-median, k-means}
}
Document
Track A: Algorithms, Complexity and Games
Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering

Authors: Sami Davies, Benjamin Moseley, and Heather Newman

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously O(1)-approximate for all 𝓁_p-norms of the disagreement vector; in other words, a combinatorial O(1)-approximation of the all-norms objective for correlation clustering. This is the first proof that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. In addition, our algorithm is the first combinatorial approximation algorithm for the 𝓁₂-norm objective, and more generally the first combinatorial algorithm for the 𝓁_p-norm objective when 1 < p < ∞. It is also faster than all previous algorithms that minimize the 𝓁_p-norm of the disagreement vector, with run-time O(n^ω), where O(n^ω) is the time for matrix multiplication on n × n matrices. When the maximum positive degree in the graph is at most Δ, this can be improved to a run-time of O(nΔ² log n).
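
To pin down the objective: on an unweighted complete graph with +/- edge labels, a vertex's disagreement under a clustering is the number of its '+' edges cut by the clustering plus the number of its '-' edges kept inside its cluster, and the algorithm above competes on every 𝓁_p-norm of that vector at once. A small evaluation sketch (our own illustration):

def disagreement_vector(n, plus_edges, clustering):
    # Disagreements per vertex for correlation clustering on the complete
    # graph K_n: pairs in `plus_edges` are '+', all other pairs are '-'.
    # `clustering` maps each vertex to a cluster id.
    plus = {frozenset(e) for e in plus_edges}
    dis = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            same = clustering[u] == clustering[v]
            positive = frozenset((u, v)) in plus
            if positive != same:  # '+' cut apart, or '-' kept together
                dis[u] += 1
                dis[v] += 1
    return dis

def lp_norm(vec, p):
    return max(vec) if p == float("inf") else sum(x ** p for x in vec) ** (1 / p)

# Example: 4 vertices, '+' edges forming a path, clusters {0,1} and {2,3}.
dis = disagreement_vector(4, [(0, 1), (1, 2), (2, 3)], {0: 0, 1: 0, 2: 1, 3: 1})
print(dis, lp_norm(dis, 1), lp_norm(dis, 2))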

Cite as

Sami Davies, Benjamin Moseley, and Heather Newman. Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{davies_et_al:LIPIcs.ICALP.2024.52,
  author =	{Davies, Sami and Moseley, Benjamin and Newman, Heather},
  title =	{{Simultaneously Approximating All 𝓁\underline{p}-Norms in Correlation Clustering}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.52},
  URN =		{urn:nbn:de:0030-drops-201950},
  doi =		{10.4230/LIPIcs.ICALP.2024.52},
  annote =	{Keywords: Approximation algorithms, correlation clustering, all-norms, lp-norms}
}
Document
Track A: Algorithms, Complexity and Games
Random Separating Hyperplane Theorem and Learning Polytopes

Authors: Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. The theorem asserts that for a point a not in a closed convex set K, there is a hyperplane with K on one side and a strictly on the other side. Our first result, Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a and a polytope K with k vertices and unit diameter in ℜ^d is at least δ, where δ is a fixed constant in (0,1), then a randomly chosen hyperplane separates a and K with probability at least 1/poly(k) and margin at least Ω (δ/√d). RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit diameter polytope K within Hausdorff distance δ, given an optimization oracle for K. Using RSH, we show that with polynomially many random queries to the optimization oracle, K can be approximated within error O(δ). To our knowledge, this is the first provable algorithm for the Hausdorff Problem in this setting. Building on this result, we show that if the vertices of K are well-separated, then an optimization oracle can be used to generate a list of points, each within distance O(δ) of K, with the property that the list contains a point close to each vertex of K. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling, LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption. This assumption states that each vertex of K is far from the convex hull of the remaining vertices of K, and is much weaker than other assumptions behind algorithms in the literature which find vertices of the latent polytope.
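
A quick way to get a feel for RSH is to test it empirically: draw a uniformly random unit direction and check whether the induced hyperplane puts the query point on one side of the polytope with margin proportional to δ/√d. The sketch below is our own numerical illustration under the theorem's assumptions (unit diameter, distance at least δ); the margin constant c is ours, not the one implicit in the theorem.

import math
import random

def random_separation_frequency(a, vertices, delta, trials=10000, c=0.1, seed=0):
    # Estimate how often a uniformly random direction yields a hyperplane
    # separating the point `a` from conv(vertices) with margin
    # c * delta / sqrt(d). The constant c here is arbitrary.
    rng = random.Random(seed)
    d = len(a)
    margin = c * delta / math.sqrt(d)
    hits = 0
    for _ in range(trials):
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in g)) or 1.0
        u = [x / norm for x in g]  # uniform random unit direction
        proj_a = sum(ui * ai for ui, ai in zip(u, a))
        proj_k = max(sum(ui * vi for ui, vi in zip(u, v)) for v in vertices)
        if proj_a >= proj_k + margin:
            hits += 1
    return hits / trials

# Example: a polytope of diameter < 1 in R^3 and a point at distance 0.5 from it.
K = [(0.0, 0.0, 0.0), (0.7, 0.0, 0.0), (0.0, 0.7, 0.0)]
print(random_separation_frequency((0.2, 0.2, 0.5), K, delta=0.5))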

Cite as

Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar. Random Separating Hyperplane Theorem and Learning Polytopes. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.ICALP.2024.25,
  author =	{Bhattacharyya, Chiranjib and Kannan, Ravindran and Kumar, Amit},
  title =	{{Random Separating Hyperplane Theorem and Learning Polytopes}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.25},
  URN =		{urn:nbn:de:0030-drops-201687},
  doi =		{10.4230/LIPIcs.ICALP.2024.25},
  annote =	{Keywords: Separating Hyperplane Theorem, Learning Polytopes, Optimization Oracles}
}
Document
Track A: Algorithms, Complexity and Games
Streaming Algorithms for Connectivity Augmentation

Authors: Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We study the k-connectivity augmentation problem (k-CAP) in the single-pass streaming model. Given a (k-1)-edge connected graph G = (V,E) that is stored in memory, and a stream of weighted edges (also called links) L with weights in {0,1,…,W}, the goal is to choose a minimum-weight subset L' ⊆ L of the links such that G' = (V,E ∪ L') is k-edge connected. We give a (2+ε)-approximation algorithm for this problem which requires storing O(ε^{-1} n log n) words. Moreover, we show the tightness of our result: Any algorithm with an approximation factor better than 2 for this problem requires Ω(n²) bits of space even when k = 2. This establishes a gap between the optimal approximation factor one can obtain in the streaming vs. the offline setting for k-CAP. We further consider a natural generalization to the fully streaming model where both E and L arrive in the stream in an arbitrary order. We show that this problem has a space lower bound that matches the best possible size of a spanner of the same approximation ratio. Following this, we give improved results for spanners on weighted graphs: We show a streaming algorithm that finds a (2t-1+ε)-approximate weighted spanner of size at most O(ε^{-1} n^{1+1/t} log n) for integer t, whereas the best prior streaming algorithm for spanners on weighted graphs had size depending on log W. We believe that this result is of independent interest. Using our spanner result, we provide an optimal O(t)-approximation for k-CAP in the fully streaming model with O(nk + n^{1+1/t}) words of space. Finally, we apply our results to network design problems such as the Steiner tree augmentation problem (STAP), k-edge connected spanning subgraph (k-ECSS), and the general Survivable Network Design problem (SNDP). In particular, we show a single-pass O(t log k)-approximation for SNDP using O(kn^{1+1/t}) words of space, where k is the maximum connectivity requirement.

Cite as

Ce Jin, Michael Kapralov, Sepideh Mahabadi, and Ali Vakilian. Streaming Algorithms for Connectivity Augmentation. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 93:1-93:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jin_et_al:LIPIcs.ICALP.2024.93,
  author =	{Jin, Ce and Kapralov, Michael and Mahabadi, Sepideh and Vakilian, Ali},
  title =	{{Streaming Algorithms for Connectivity Augmentation}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{93:1--93:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.93},
  URN =		{urn:nbn:de:0030-drops-202367},
  doi =		{10.4230/LIPIcs.ICALP.2024.93},
  annote =	{Keywords: streaming algorithms, connectivity augmentation}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack for some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)} then we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: The number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof showing that Knapsack does not admit a polynomial kernel for any of these two parameters under standard complexity-theoretic assumptions. Our proof discovers an elaborate application of the standard kernelization lower bound framework, and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
Document
Track A: Algorithms, Complexity and Games
Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces

Authors: Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We consider the well-studied Robust (k,z)-Clustering problem, which generalizes the classic k-Median, k-Means, and k-Center problems and arises in the domains of robust optimization [Anthony, Goyal, Gupta, Nagarajan, Math. Oper. Res. 2010] and in algorithmic fairness [Abbasi, Bhaskara, Venkatasubramanian, 2021 & Ghadiri, Samadi, Vempala, 2022]. Given a constant z ≥ 1, the input to Robust (k,z)-Clustering is a set P of n points in a metric space (M,δ), a weight function w: P → ℝ_{≥ 0} and a positive integer k. Further, each point belongs to one (or more) of the m many different groups S_1,S_2,…,S_m ⊆ P. Our goal is to find a set X of k centers such that max_{i ∈ [m]} ∑_{p ∈ S_i} w(p) δ(p,X)^z is minimized. Complementing recent work on this problem, we give a comprehensive understanding of the parameterized approximability of the problem in geometric spaces where the parameter is the number k of centers. We prove the following results: 1) For a universal constant η₀ > 0.0006, we devise a 3^z(1-η₀)-factor FPT approximation algorithm for Robust (k,z)-Clustering in discrete high-dimensional Euclidean spaces where the set of potential centers is finite. This shows that the lower bound of 3^z for general metrics [Goyal, Jaiswal, Inf. Proc. Letters, 2023] no longer holds when the metric has geometric structure. 2) We show that Robust (k,z)-Clustering in discrete Euclidean spaces is (√{3/2} - o(1))-hard to approximate for FPT algorithms, even if we consider the special case k-Center in logarithmic dimensions. This rules out a (1+ε)-approximation algorithm running in time f(k,ε)poly(m,n) (also called efficient parameterized approximation scheme or EPAS), giving a striking contrast with the recent EPAS for the continuous setting where centers can be placed anywhere in the space [Abbasi et al., FOCS'23]. 3) However, we obtain an EPAS for Robust (k,z)-Clustering in discrete Euclidean spaces when the dimension is sublogarithmic (for the discrete problem, earlier work [Abbasi et al., FOCS'23] provides an EPAS only in dimension o(log log n)). Our EPAS works also for metrics of sub-logarithmic doubling dimension.
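
The objective above is straightforward to evaluate once a center set X is fixed; all of the difficulty is in choosing X. A minimal evaluation sketch for the Euclidean case (our own illustration):

import math

def robust_kz_cost(points, weights, groups, centers, z):
    # Evaluate the Robust (k,z)-Clustering objective
    #   max_i  sum_{p in S_i} w(p) * dist(p, X)^z
    # for a fixed center set X, with Euclidean distances.
    def dist_to_centers(p):
        return min(math.dist(p, c) for c in centers)
    return max(
        sum(weights[p] * dist_to_centers(points[p]) ** z for p in group)
        for group in groups)

# Example: 4 points, two groups, one center, z = 2 (robust k-means-style cost).
pts = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 2.0), 3: (3.0, 0.0)}
w = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
S = [[0, 1], [2, 3]]
print(robust_kz_cost(pts, w, S, centers=[(0.0, 0.0)], z=2))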

Cite as

Fateme Abbasi, Sandip Banerjee, Jarosław Byrka, Parinya Chalermsook, Ameet Gadekar, Kamyar Khodamoradi, Dániel Marx, Roohani Sharma, and Joachim Spoerhase. Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abbasi_et_al:LIPIcs.ICALP.2024.6,
  author =	{Abbasi, Fateme and Banerjee, Sandip and Byrka, Jaros{\l}aw and Chalermsook, Parinya and Gadekar, Ameet and Khodamoradi, Kamyar and Marx, D\'{a}niel and Sharma, Roohani and Spoerhase, Joachim},
  title =	{{Parameterized Approximation For Robust Clustering in Discrete Geometric Spaces}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.6},
  URN =		{urn:nbn:de:0030-drops-201494},
  doi =		{10.4230/LIPIcs.ICALP.2024.6},
  annote =	{Keywords: Clustering, approximation algorithms, parameterized complexity}
}
Document
Bit Complexity of Jordan Normal Form and Polynomial Spectral Factorization

Authors: Papri Dey, Ravi Kannan, Nick Ryder, and Nikhil Srivastava

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We study the bit complexity of two related fundamental computational problems in linear algebra and control theory. Our results are: (1) An Õ(n^{ω+3}a+n⁴a²+n^ωlog(1/ε)) time algorithm for finding an ε-approximation to the Jordan Normal form of an integer matrix with a-bit entries, where ω is the exponent of matrix multiplication. (2) An Õ(n⁶d⁶a+n⁴d⁴a²+n³d³log(1/ε)) time algorithm for ε-approximately computing the spectral factorization P(x) = Q^*(x)Q(x) of a given monic n × n rational matrix polynomial of degree 2d with rational a-bit coefficients having a-bit common denominators, which satisfies P(x)⪰0 for all real x. The first algorithm is used as a subroutine in the second one. Despite their central importance, polynomial complexity bounds were not previously known for spectral factorization, and for Jordan form the previous best running time was an unspecified polynomial in n of degree at least twelve [Cai, 1994]. Our algorithms are simple and judiciously combine techniques from numerical and symbolic computation, yielding significant advantages over either approach by itself.
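
For readers who want to experiment, exact symbolic Jordan form is available off the shelf via sympy's Matrix.jordan_form. This is only a baseline for comparison, not the paper's ε-approximation algorithm, and exact symbolic computation is precisely what becomes expensive as the bit length a of the entries grows.

import sympy as sp

# Baseline only: exact symbolic Jordan normal form via sympy, not the
# paper's epsilon-approximation algorithm. Exact arithmetic is what blows
# up as the entry bit length grows.
M = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])
P, J = M.jordan_form()   # M = P * J * P**(-1)
sp.pprint(J)
assert P * J * P.inv() == M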

Cite as

Papri Dey, Ravi Kannan, Nick Ryder, and Nikhil Srivastava. Bit Complexity of Jordan Normal Form and Polynomial Spectral Factorization. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 42:1-42:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dey_et_al:LIPIcs.ITCS.2023.42,
  author =	{Dey, Papri and Kannan, Ravi and Ryder, Nick and Srivastava, Nikhil},
  title =	{{Bit Complexity of Jordan Normal Form and Polynomial Spectral Factorization}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{42:1--42:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.42},
  URN =		{urn:nbn:de:0030-drops-175450},
  doi =		{10.4230/LIPIcs.ITCS.2023.42},
  annote =	{Keywords: Symbolic algorithms, numerical algorithms, linear algebra}
}
  • Refine by Author
  • 4 Kannan, Ravi
  • 2 Brazdil, Tomas
  • 2 Dawar, Anuj
  • 2 Fomin, Fedor V.
  • 2 Kapralov, Michael

  • Refine by Classification
  • 3 Theory of computation → Graph algorithms analysis
  • 3 Theory of computation → Streaming, sublinear and near linear time algorithms
  • 2 Theory of computation → Approximation algorithms analysis
  • 2 Theory of computation → Facility location and clustering
  • 2 Theory of computation → Lower bounds and information complexity

  • Refine by Keyword
  • 2 Büchi Automata
  • 2 Clustering
  • 2 Communication complexity
  • 2 Expander Decomposition
  • 2 Petri nets

  • Refine by Type
  • 57 document
  • 1 volume

  • Refine by Publication Year
  • 41 2009
  • 13 2024
  • 1 2008
  • 1 2013
  • 1 2020
