8 Search Results for "Deshpande, Amit"


Document
APPROX
On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting

Authors: Mayank Goswami and Riko Jacob

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We generalize the classical nuts and bolts problem to a setting where the input is a collection of n nuts and m bolts, and there is no promise of any matching pairs. It is not allowed to compare a nut directly with a nut or a bolt directly with a bolt, and the goal is to perform the fewest nut-bolt comparisons to discover the partial order between the nuts and bolts. We term this problem bipartite sorting. We show that instances of bipartite sorting of the same size exhibit a wide range of complexity, and propose to perform a fine-grained analysis for this problem. We rule out straightforward notions of instance-optimality as being too stringent, and adopt a neighborhood-based definition. Our definition may be of independent interest as a unifying lens for instance-optimal algorithms for other static problems existing in the literature. This includes problems like sorting (Estivill-Castro and Wood, ACM Comput. Surv. 1992), convex hull (Afshani, Barbay and Chan, JACM 2017), adaptive joins (Demaine, López-Ortiz and Munro, SODA 2000), and the recent concept of universal optimality for graphs (Haeupler, Hladík, Rozhoň, Tarjan and Tětek, 2023). As our main result on bipartite sorting, we give a randomized algorithm that is within a factor of O(log³(n+m)) of being instance-optimal w.h.p., with respect to the neighborhood-based definition. As our second contribution, we generalize bipartite sorting to DAG sorting, when the underlying DAG is not necessarily bipartite. As an unexpected consequence of a simple algorithm for DAG sorting, we rule out a potential lower bound on the widely-studied problem of sorting with priced information, posed by (Charikar, Fagin, Guruswami, Kleinberg, Raghavan and Sahai, STOC 2000). In this problem, comparing keys i and j has a known cost c_{ij} ∈ ℝ^+ ∪ {∞}, and the goal is to sort the keys in an instance-optimal way, by keeping the total cost of an algorithm as close as possible to ∑_{i=1}^{n-1} c_{x(i)x(i+1)}. Here x(1) < ⋯ < x(n) is the sorted order. While several special cases of cost functions have received a lot of attention in the community, no progress on the general version with arbitrary costs has been reported so far. One reason for this lack of progress seems to be a widely-cited Ω(n) lower bound on the competitive ratio for finding the maximum. This Ω(n) lower bound by (Gupta and Kumar, FOCS 2000) uses costs in {0, 1, n, ∞}, and although not extended to sorting, this barrier seems to have stalled any progress on the general cost case. We rule out such a potential lower bound by showing the existence of an algorithm with an Õ(n^{3/4}) competitive ratio for the {0, 1, n, ∞} cost version. This generalizes the setting of generalized sorting proposed by (Huang, Kannan and Khanna, FOCS 2011), where the costs are either 1 or infinity, and the cost of the cheapest proof is always n-1.
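
As a small illustration of the benchmark used in sorting with priced information, the Python sketch below (with a hypothetical cost matrix and helper cheapest_proof_cost) computes the cost of the cheapest proof of the sorted order, i.e., the sum ∑_{i=1}^{n-1} c_{x(i)x(i+1)} over consecutive keys in sorted order. It illustrates the objective only, not the algorithms of the paper.

# Toy illustration of the priced-information benchmark: the cost of the cheapest
# certificate of the sorted order is the sum of the n-1 comparison costs between
# consecutive keys in sorted order. Not an algorithm from the paper.
import math

def cheapest_proof_cost(keys, cost):
    """keys: distinct comparable values; cost[i][j]: price of comparing keys i and j."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])   # x(1) < ... < x(n)
    return sum(cost[order[i]][order[i + 1]] for i in range(len(keys) - 1))

if __name__ == "__main__":
    INF = math.inf
    keys = [9, 2, 7, 4]
    cost = [[0, INF, 1, 1],
            [INF, 0, 1, 1],
            [1, 1, 0, 1],
            [1, 1, 1, 0]]
    print(cheapest_proof_cost(keys, cost))   # 3: proves 2 < 4 < 7 < 9 with three unit-cost comparisons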

Cite as

Mayank Goswami and Riko Jacob. On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 23:1-23:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{goswami_et_al:LIPIcs.APPROX/RANDOM.2024.23,
  author =	{Goswami, Mayank and Jacob, Riko},
  title =	{{On Instance-Optimal Algorithms for a Generalization of Nuts and Bolts and Generalized Sorting}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{23:1--23:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  URN =		{urn:nbn:de:0030-drops-210168},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.23},
  annote =	{Keywords: Sorting, Priced Information, Instance Optimality, Nuts and Bolts}
}
Document
APPROX
Approximation Algorithms for Correlated Knapsack Orienteering

Authors: David Alemán Espinosa and Chaitanya Swamy

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We consider the correlated knapsack orienteering (CorrKO) problem: we are given a travel budget B, processing-time budget W, and a finite metric space (V,d) with root ρ ∈ V, where each vertex is associated with a job with possibly correlated random size and random reward that become known only when the job completes. Random variables are independent across different vertices. The goal is to compute a ρ-rooted path of length at most B, in a possibly adaptive fashion, that maximizes the reward collected from jobs that are processed by time W. To our knowledge, CorrKO has not been considered before, though prior work has considered the uncorrelated problem, stochastic knapsack orienteering, and correlated orienteering, which features only one budget constraint on the sum of travel-time and processing-times. Gupta et al. [Gupta et al., 2015] showed that the uncorrelated version of this problem has a constant-factor adaptivity gap. We show that, perhaps surprisingly and in stark contrast to the uncorrelated problem, the adaptivity gap of CorrKO is at least Ω(max{√(log B), √(log log W)}). Complementing this result, we devise non-adaptive algorithms that obtain: (a) O(log log W)-approximation in quasi-polytime; and (b) O(log W)-approximation in polytime. This also establishes that the adaptivity gap for CorrKO is at most O(log log W). We obtain similar guarantees for CorrKO with cancellations, wherein a job can be cancelled before its completion time, forgoing its reward. We show that an α-approximation for CorrKO implies an O(α)-approximation for CorrKO with cancellations. We also consider the special case of CorrKO where job sizes are weighted Bernoulli distributions, and more generally where the distributions are supported on at most two points (2CorrKO). Although weighted Bernoulli distributions suffice to yield an Ω(√(log log B)) adaptivity-gap lower bound for (uncorrelated) stochastic orienteering, we show that they are easy instances for CorrKO. We develop non-adaptive algorithms that achieve O(1)-approximation, in polytime for weighted Bernoulli distributions, and in (n+log B)^{O(log W)} time for 2CorrKO. (Thus, our adaptivity-gap lower-bound example, which uses distributions of support size 3, is tight in terms of the support size of the distributions.) Finally, we leverage our techniques to provide a quasi-polynomial time O(log log B) approximation algorithm for correlated orienteering, improving upon the approximation guarantee in [Bansal and Nagarajan, 2015].
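
To make the problem statement concrete, here is a hedged Python sketch (hypothetical helper simulate_path and toy distributions) that evaluates a fixed, non-adaptive CorrKO policy by Monte-Carlo simulation: travel along a ρ-rooted path of length at most B and collect the reward of every job that finishes within the processing-time budget W. It illustrates the model only and is not an algorithm from the paper.

# Monte-Carlo evaluation of a fixed (non-adaptive) CorrKO policy; illustrates the
# problem definition only. Job outcomes are (probability, size, reward) triples,
# so size and reward can be correlated within a vertex, and are independent across vertices.
import random

def simulate_path(path, dist, jobs, B, W, trials=10000, seed=0):
    """path: vertices starting at the root; dist[(u, v)]: metric distance; jobs[v]: outcome list."""
    travel = sum(dist[(path[i], path[i + 1])] for i in range(len(path) - 1))
    assert travel <= B, "path exceeds the travel budget"
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        elapsed = reward = 0.0
        for v in path:
            p, acc = rng.random(), 0.0
            for prob, size, rew in jobs[v]:
                acc += prob
                if p <= acc:
                    elapsed += size
                    if elapsed <= W:          # reward counts only if the job finishes by time W
                        reward += rew
                    break
        total += reward
    return total / trials

if __name__ == "__main__":
    dist = {("rho", "a"): 2.0, ("a", "b"): 1.0}
    jobs = {"rho": [(1.0, 0.0, 0.0)],
            "a": [(0.5, 1.0, 5.0), (0.5, 4.0, 10.0)],   # larger size comes with larger reward
            "b": [(1.0, 2.0, 3.0)]}
    print(simulate_path(["rho", "a", "b"], dist, jobs, B=4.0, W=3.0))   # roughly 4.0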

Cite as

David Alemán Espinosa and Chaitanya Swamy. Approximation Algorithms for Correlated Knapsack Orienteering. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 29:1-29:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alemanespinosa_et_al:LIPIcs.APPROX/RANDOM.2024.29,
  author =	{Alem\'{a}n Espinosa, David and Swamy, Chaitanya},
  title =	{{Approximation Algorithms for Correlated Knapsack Orienteering}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{29:1--29:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.29},
  URN =		{urn:nbn:de:0030-drops-210224},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.29},
  annote =	{Keywords: Approximation algorithms, Stochastic orienteering, Adaptivity gap, Vehicle routing problems, LP rounding algorithms}
}
Document
Faster Approximation Schemes for (Constrained) k-Means with Outliers

Authors: Zhen Zhang, Junyu Huang, and Qilong Feng

Published in: LIPIcs, Volume 306, 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)


Abstract
Given a set of n points in ℝ^d and two positive integers k and m, the Euclidean k-means with outliers problem aims to remove at most m points, referred to as outliers, and minimize the k-means cost function for the remaining points. Developing algorithms for this problem remains an active area of research due to its prevalence in applications involving noisy data. In this paper, we give a (1+ε)-approximation algorithm that runs in n²d((k+m)ε^{-1})^O(kε^{-1}) time for the problem. When combined with a coreset construction method, the running time of the algorithm can be improved to be linear in n. For the case where k is a constant, this represents the first polynomial-time approximation scheme for the problem: Existing algorithms with the same approximation guarantee run in polynomial time only when both k and m are constants. Furthermore, our approach generalizes to variants of k-means with outliers incorporating additional constraints on instances, such as those related to capacities and fairness.
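
The objective can be stated compactly in code. The following sketch (hypothetical helper kmeans_outliers_cost, assuming NumPy) evaluates the k-means-with-outliers cost of a fixed set of centers by discarding the m points farthest from their nearest center; it illustrates the problem definition, not the approximation scheme of the paper.

# k-means with outliers: for fixed centers, drop the m points with the largest
# squared distance to their nearest center and sum the remaining squared distances.
import numpy as np

def kmeans_outliers_cost(points, centers, m):
    """points: (n, d) array; centers: (k, d) array; m: number of outliers to discard."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    return np.sort(d2)[: len(points) - m].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(20, 1, (3, 2))])  # 3 far-away points
    print(kmeans_outliers_cost(pts, centers=np.zeros((1, 2)), m=3))          # the 3 outliers are ignored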

Cite as

Zhen Zhang, Junyu Huang, and Qilong Feng. Faster Approximation Schemes for (Constrained) k-Means with Outliers. In 49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 306, pp. 84:1-84:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{zhang_et_al:LIPIcs.MFCS.2024.84,
  author =	{Zhang, Zhen and Huang, Junyu and Feng, Qilong},
  title =	{{Faster Approximation Schemes for (Constrained) k-Means with Outliers}},
  booktitle =	{49th International Symposium on Mathematical Foundations of Computer Science (MFCS 2024)},
  pages =	{84:1--84:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-335-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{306},
  editor =	{Kr\'{a}lovi\v{c}, Rastislav and Ku\v{c}era, Anton{\'\i}n},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2024.84},
  URN =		{urn:nbn:de:0030-drops-206408},
  doi =		{10.4230/LIPIcs.MFCS.2024.84},
  annote =	{Keywords: Approximation algorithms, clustering}
}
Document
Track A: Algorithms, Complexity and Games
Fully-Scalable MPC Algorithms for Clustering in High Dimension

Authors: Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We design new parallel algorithms for clustering in high-dimensional Euclidean spaces. These algorithms run in the Massively Parallel Computation (MPC) model, and are fully scalable, meaning that the local memory in each machine may be n^σ for arbitrarily small fixed σ > 0. Importantly, the local memory may be substantially smaller than the number of clusters k, yet all our algorithms are fast, i.e., run in O(1) rounds. We first devise a fast MPC algorithm for O(1)-approximation of uniform Facility Location. This is the first fully-scalable MPC algorithm that achieves O(1)-approximation for any clustering problem in a general geometric setting; previous algorithms only provide poly(log n)-approximation or apply to restricted inputs, such as low dimension or a small number of clusters k; see, e.g., [Bhaskara and Wijewardena, ICML'18; Cohen-Addad et al., NeurIPS'21; Cohen-Addad et al., ICML'22]. We then build on this Facility Location result and devise a fast MPC algorithm that achieves O(1)-bicriteria approximation for k-Median and for k-Means, namely, it computes (1+ε)k clusters of cost within an O(1/ε²) factor of the optimum for k clusters. A primary technical tool that we introduce, and which may be of independent interest, is a new MPC primitive for geometric aggregation, namely, computing for every data point a statistic of its approximate neighborhood, for statistics like range counting and nearest-neighbor search. Our implementation of this primitive works in high dimension, and is based on consistent hashing (aka sparse partition), a technique that was recently used for streaming algorithms [Czumaj et al., FOCS'22].
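
As a rough illustration of the geometric-aggregation primitive (computing, for every point, a statistic of its approximate neighborhood), the toy sketch below buckets points into a randomly shifted grid and reports each point's cell population as a crude neighborhood count. This is only a simplified stand-in, not the consistent-hashing-based MPC primitive of the paper.

# Toy geometric aggregation: hash points to cells of a randomly shifted grid of side
# delta and report, for each point, how many input points share its cell. A crude
# proxy for range counting in a delta-neighborhood; not the paper's MPC primitive.
from collections import Counter
import random

def cell_counts(points, delta, seed=0):
    rng = random.Random(seed)
    shift = [rng.uniform(0, delta) for _ in range(len(points[0]))]

    def cell(p):
        return tuple(int((x + s) // delta) for x, s in zip(p, shift))

    counts = Counter(cell(p) for p in points)
    return [counts[cell(p)] for p in points]   # one aggregate statistic per input point

if __name__ == "__main__":
    pts = [(0.10, 0.20), (0.12, 0.22), (5.0, 5.0)]
    print(cell_counts(pts, delta=1.0))          # the two nearby points typically share a cell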

Cite as

Artur Czumaj, Guichen Gao, Shaofeng H.-C. Jiang, Robert Krauthgamer, and Pavel Veselý. Fully-Scalable MPC Algorithms for Clustering in High Dimension. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 50:1-50:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{czumaj_et_al:LIPIcs.ICALP.2024.50,
  author =	{Czumaj, Artur and Gao, Guichen and Jiang, Shaofeng H.-C. and Krauthgamer, Robert and Vesel\'{y}, Pavel},
  title =	{{Fully-Scalable MPC Algorithms for Clustering in High Dimension}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{50:1--50:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.50},
  URN =		{urn:nbn:de:0030-drops-201938},
  doi =		{10.4230/LIPIcs.ICALP.2024.50},
  annote =	{Keywords: Massively parallel computing, high dimension, facility location, k-median, k-means}
}
Document
Track A: Algorithms, Complexity and Games
One-Pass Additive-Error Subset Selection for 𝓁_p Subspace Approximation

Authors: Amit Deshpande and Rameshwar Pratap

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
We consider the problem of subset selection for 𝓁_p subspace approximation, that is, to efficiently find a small subset of data points such that solving the problem optimally for this subset gives a good approximation to solving the problem optimally for the original input. Previously known subset selection algorithms based on volume sampling and adaptive sampling [Deshpande and Varadarajan, 2007], for the general case of p ∈ [1, ∞), require multiple passes over the data. In this paper, we give a one-pass subset selection algorithm with an additive approximation guarantee for 𝓁_p subspace approximation, for any p ∈ [1, ∞). Earlier subset selection algorithms that give a one-pass multiplicative (1+ε) approximation work only in special cases. Cohen et al. [Michael B. Cohen et al., 2017] give a one-pass subset selection that offers a multiplicative (1+ε) approximation guarantee for the special case of 𝓁₂ subspace approximation. Mahabadi et al. [Sepideh Mahabadi et al., 2020] give a one-pass noisy subset selection with a (1+ε) approximation guarantee for 𝓁_p subspace approximation when p ∈ {1, 2}. Our subset selection algorithm gives a weaker, additive approximation guarantee, but it works for any p ∈ [1, ∞).
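
For reference, the quantity being approximated is the 𝓁_p subspace-approximation cost. The short sketch below (hypothetical helper lp_subspace_cost, assuming NumPy) evaluates this cost for a candidate subspace; subset selection, as studied in the paper, asks for a small subset of rows whose optimal subspace nearly minimizes this cost on the full data.

# l_p subspace-approximation cost of a candidate subspace: the sum of the p-th
# powers of the Euclidean distances of the data points to the subspace.
# Illustrates the objective only, not the one-pass subset-selection algorithm.
import numpy as np

def lp_subspace_cost(X, basis, p):
    """X: (n, d) data; basis: (k, d) matrix with orthonormal rows spanning the subspace; p >= 1."""
    proj = X @ basis.T @ basis                  # projection of each row onto the subspace
    dists = np.linalg.norm(X - proj, axis=1)    # distance of each point to the subspace
    return float((dists ** p).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    basis = np.linalg.qr(rng.normal(size=(5, 2)))[0].T    # a random 2-dimensional subspace
    print(lp_subspace_cost(X, basis, p=1.5))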

Cite as

Amit Deshpande and Rameshwar Pratap. One-Pass Additive-Error Subset Selection for 𝓁_p Subspace Approximation. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 51:1-51:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{deshpande_et_al:LIPIcs.ICALP.2022.51,
  author =	{Deshpande, Amit and Pratap, Rameshwar},
  title =	{{One-Pass Additive-Error Subset Selection for 𝓁\underlinep Subspace Approximation}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{51:1--51:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.51},
  URN =		{urn:nbn:de:0030-drops-163924},
  doi =		{10.4230/LIPIcs.ICALP.2022.51},
  annote =	{Keywords: Subspace approximation, streaming algorithms, low-rank approximation, adaptive sampling, volume sampling, subset selection}
}
Document
On the Complexity of Constrained Determinantal Point Processes

Authors: L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, and Nisheeth K. Vishnoi

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
Determinantal Point Processes (DPPs) are probabilistic models that arise in quantum physics and random matrix theory and have recently found numerous applications in theoretical computer science and machine learning. DPPs define probability distributions over subsets of a given ground set; they exhibit interesting properties such as negative correlation, and, unlike other models of negative correlation such as Markov random fields, have efficient algorithms for sampling. When applied to kernel methods in machine learning, DPPs favor subsets of the given data with more diverse features. However, many real-world applications require efficient algorithms to sample from DPPs with additional constraints on the sampled subset, e.g., partition or matroid constraints that are important for enforcing priors, resource limits, or fairness constraints on the sampled subset. Whether one can efficiently sample from DPPs in such constrained settings is an important problem that was first raised in a survey of DPPs for machine learning by Kulesza and Taskar and studied in some recent works. The main contribution of this paper is the first resolution of the complexity of sampling from DPPs with constraints. On the one hand, we give exact efficient algorithms for sampling from constrained DPPs when the description of the constraints is in unary; this includes special cases of practical importance such as a small number of partition, knapsack or budget constraints. On the other hand, we prove that when the constraints are specified in binary, this problem is #P-hard via a reduction from the problem of computing mixed discriminants, suggesting that an FPRAS is unlikely. Technically, our algorithmic result benefits from viewing the constrained sampling problem through the lens of polynomials, and we obtain our complexity results by providing an equivalence between computing mixed discriminants and sampling from partition-constrained DPPs. As a consequence, we obtain a few corollaries of independent interest: 1) An algorithm to count, sample (and, hence, optimize) over the base polytope of regular matroids when there are additional (succinct) budget constraints, and 2) An algorithm to evaluate and compute mixed characteristic polynomials, which played a central role in the resolution of the Kadison-Singer problem, for certain special cases.
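
For concreteness, the snippet below evaluates the standard (unconstrained) L-ensemble DPP probability of a subset S, Pr[S] = det(L_S) / det(L + I); it only illustrates what a DPP is and does not perform the constrained sampling studied in the paper.

# Probability of a subset S under an (unconstrained) L-ensemble DPP:
# Pr[S] = det(L_S) / det(L + I). Illustration only; no constraints are handled here.
import numpy as np

def dpp_probability(L, S):
    """L: (n, n) symmetric positive semidefinite kernel; S: list of indices."""
    L_S = L[np.ix_(S, S)]
    return np.linalg.det(L_S) / np.linalg.det(L + np.eye(L.shape[0]))

if __name__ == "__main__":
    A = np.random.default_rng(2).normal(size=(4, 4))
    L = A @ A.T                                  # a PSD kernel; diverse subsets get higher probability
    print(dpp_probability(L, [0, 2]))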

Cite as

L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, and Nisheeth K. Vishnoi. On the Complexity of Constrained Determinantal Point Processes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 36:1-36:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{celis_et_al:LIPIcs.APPROX-RANDOM.2017.36,
  author =	{Celis, L. Elisa and Deshpande, Amit and Kathuria, Tarun and Straszak, Damian and Vishnoi, Nisheeth K.},
  title =	{{On the Complexity of Constrained Determinantal Point Processes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{36:1--36:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.36},
  URN =		{urn:nbn:de:0030-drops-75851},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.36},
  annote =	{Keywords: determinantal point processes, constraints, matroids, sampling and counting, polynomials, mixed discriminant}
}
Document
Embedding Approximately Low-Dimensional l_2^2 Metrics into l_1

Authors: Amit Deshpande, Prahladh Harsha, and Rakesh Venkat

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
Goemans showed that any n points x_1, ..., x_n in d dimensions satisfying the l_2^2 triangle inequalities can be embedded into l_1 with worst-case distortion at most sqrt(d). We consider an extension of this theorem to the case when the points are approximately low-dimensional as opposed to exactly low-dimensional, and prove the following analogous theorem, albeit with average distortion guarantees: There exists an l_2^2-to-l_1 embedding with average distortion at most the stable rank, sr(M), of the matrix M whose columns are {x_i - x_j}_{i<j}. Average-distortion embeddings suffice for applications such as the SPARSEST CUT problem. Our embedding gives an approximation algorithm for the SPARSEST CUT problem on low threshold-rank graphs, where earlier work was inspired by the Lasserre SDP hierarchy, and improves on a previous result of the first and third author [Deshpande and Venkat, in Proc. 17th APPROX, 2014]. Our ideas give a new perspective on the l_2^2 metric, an alternate proof of Goemans' theorem, and a simpler proof of average distortion sqrt(d).
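
The distortion bound is stated in terms of the stable rank of the matrix of pairwise difference vectors. The sketch below (hypothetical helper stable_rank_of_differences, assuming NumPy) computes sr(M) = ||M||_F^2 / ||M||_2^2 for that matrix; it illustrates the quantity in the theorem, not the embedding itself.

# Stable rank of the matrix M whose columns are the difference vectors x_i - x_j:
# sr(M) = ||M||_F^2 / ||M||_2^2 (squared Frobenius norm over squared spectral norm).
import numpy as np
from itertools import combinations

def stable_rank_of_differences(X):
    """X: (n, d) array of points x_1, ..., x_n."""
    cols = np.array([X[i] - X[j] for i, j in combinations(range(len(X)), 2)]).T
    fro2 = (cols ** 2).sum()
    spec2 = np.linalg.norm(cols, 2) ** 2        # largest singular value, squared
    return fro2 / spec2

if __name__ == "__main__":
    X = np.random.default_rng(3).normal(size=(6, 4))
    print(stable_rank_of_differences(X))        # always between 1 and rank(M) <= d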

Cite as

Amit Deshpande, Prahladh Harsha, and Rakesh Venkat. Embedding Approximately Low-Dimensional l_2^2 Metrics into l_1. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 10:1-10:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{deshpande_et_al:LIPIcs.FSTTCS.2016.10,
  author =	{Deshpande, Amit and Harsha, Prahladh and Venkat, Rakesh},
  title =	{{Embedding Approximately Low-Dimensional l\underline2^2 Metrics into l\underline1}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{10:1--10:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.10},
  URN =		{urn:nbn:de:0030-drops-68456},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.10},
  annote =	{Keywords: Metric Embeddings, Sparsest Cut, Negative type metrics, Approximation Algorithms}
}
Document
Guruswami-Sinop Rounding without Higher Level Lasserre

Authors: Amit Deshpande and Rakesh Venkat

Published in: LIPIcs, Volume 28, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)


Abstract
Guruswami and Sinop give an O(1/delta) approximation guarantee for the non-uniform Sparsest Cut problem by solving O(r)-level Lasserre semidefinite constraints, provided that the generalized eigenvalues of the Laplacians of the cost and demand graphs satisfy a certain spectral condition, namely, the (r+1)-th generalized eigenvalue is at least OPT/(1-delta). Their key idea is a rounding technique that first maps a vector-valued solution to [0,1] using appropriately scaled projections onto Lasserre vectors. In this paper, we show that similar projections and analysis can be obtained using only l_2^2 triangle inequality constraints. This results in an O(r/delta^2) approximation guarantee for the non-uniform Sparsest Cut problem by adding only l_2^2 triangle inequality constraints to the usual semidefinite program, provided that the same spectral condition, the (r+1)-th generalized eigenvalue is at least OPT/(1-delta), holds.
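
The relaxation used here augments the basic SDP with l_2^2 triangle inequalities rather than higher Lasserre levels. The small utility below (a hypothetical helper, assuming NumPy) checks these inequalities for a set of SDP solution vectors; it is a sanity-check illustration, not the rounding procedure of the paper.

# l_2^2 triangle inequalities: for every ordered triple (i, j, k),
# ||v_i - v_j||^2 + ||v_j - v_k||^2 >= ||v_i - v_k||^2. Verification utility only.
import numpy as np
from itertools import permutations

def satisfies_l22_triangle_inequalities(V, tol=1e-9):
    """V: (n, d) array of SDP solution vectors v_1, ..., v_n."""
    d2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return all(d2[i, j] + d2[j, k] + tol >= d2[i, k]
               for i, j, k in permutations(range(len(V)), 3))

if __name__ == "__main__":
    corners = np.eye(3)                          # 0/1 vectors satisfy the l_2^2 triangle inequalities
    print(satisfies_l22_triangle_inequalities(corners))   # True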

Cite as

Amit Deshpande and Rakesh Venkat. Guruswami-Sinop Rounding without Higher Level Lasserre. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014). Leibniz International Proceedings in Informatics (LIPIcs), Volume 28, pp. 105-114, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{deshpande_et_al:LIPIcs.APPROX-RANDOM.2014.105,
  author =	{Deshpande, Amit and Venkat, Rakesh},
  title =	{{Guruswami-Sinop Rounding without Higher Level Lasserre}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2014)},
  pages =	{105--114},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-74-3},
  ISSN =	{1868-8969},
  year =	{2014},
  volume =	{28},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} and Devanur, Nikhil R. and Moore, Cristopher},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2014.105},
  URN =		{urn:nbn:de:0030-drops-46911},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2014.105},
  annote =	{Keywords: Sparsest Cut, Lasserre Hierarchy, Metric embeddings}
}
  • Refine by Author
  • 4 Deshpande, Amit
  • 2 Venkat, Rakesh
  • 1 Alemán Espinosa, David
  • 1 Celis, L. Elisa
  • 1 Czumaj, Artur
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Facility location and clustering
  • 1 Computing methodologies → Dimensionality reduction and manifold learning
  • 1 Mathematics of computing → Dimensionality reduction
  • 1 Mathematics of computing → Discrete optimization
  • 1 Theory of computation → Abstract machines
  • Show More...

  • Refine by Keyword
  • 2 Approximation algorithms
  • 2 Sparsest Cut
  • 1 Adaptivity gap
  • 1 Approximation Algorithms
  • 1 Instance Optimality
  • Show More...

  • Refine by Type
  • 8 document

  • Refine by Publication Year
  • 4 2024
  • 1 2014
  • 1 2016
  • 1 2017
  • 1 2022
