4 Search Results for "Charikar, Moses S."


Document
Invited Paper
Reconstructing the Tree of Life (Fitting Distances by Tree Metrics) (Invited Paper)

Authors: Mikkel Thorup

Published in: LIPIcs, Volume 227, 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)


Abstract
We consider the numerical taxonomy problem of fitting an S × S distance matrix D with a tree metric T. Here T is a weighted tree spanning S where the path lengths in T induce a metric on S. If there is a tree metric matching D exactly, then it is easily found. If there is no exact match, then for some k, we want to minimize the L_k norm of the errors, that is, pick T so as to minimize ‖D-T‖_k = (∑_{i,j ∈ S} |D(i,j)-T(i,j)|^k)^{1/k}.

This problem was raised in biology in the 1960s for k = 1, 2. The biological interpretation is that T represents a possible evolution behind the species in S matching some measured distances in D. Sometimes it is required that T is an ultrametric, meaning that all species are at the same distance from the root. An evolutionary tree induces a hierarchical classification of species, and this is not tied to biology alone: medicine, ecology and linguistics are just some of the fields where this concept appears, and it is even an integral part of machine learning and data science. Fundamentally, if we can approximate distances with a tree, then they are much easier to reason about: many questions that are NP-hard for general metrics can be answered in linear time on tree metrics. In fact, humans have appreciated hierarchical classifications at least since Plato and Aristotle (350 BC).

The numerical taxonomy problem is important in practice and many heuristics have been proposed. In this talk we will review the basic algorithmic theory, results, and techniques for the problem, including the most recent result from FOCS'21 [Vincent Cohen-Addad et al., 2021]. These results paint a varied landscape with big differences between different moments, and with some very nice open problems remaining.

- At STOC'93, Farach, Kannan, and Warnow [Martin Farach et al., 1995] proved that under L_∞, we can find the optimal ultrametric. Almost all other variants of the problem are APX-hard.
- At SODA'96, Agarwala, Bafna, Farach, Paterson, and Thorup [Richa Agarwala et al., 1999] showed that for any norm L_k, k ≥ 1, if the best ultrametric can be α-approximated, then the best tree metric can be 3α-approximated. In particular, this implied a 3-approximation for tree metrics under L_∞.
- At FOCS'05, Ailon and Charikar [Nir Ailon and Moses Charikar, 2011] showed that for any L_k, k ≥ 1, we can get an approximation factor of O(((log n)(log log n))^{1/k}) for both tree and ultrametrics. Their paper was focused on the L₁ norm, and they wrote "Determining whether an O(1) approximation can be obtained is a fascinating question".
- At FOCS'21, Cohen-Addad, Das, Kipouridis, Parotsidis, and Thorup [Vincent Cohen-Addad et al., 2021] showed that indeed a constant factor is possible for L₁ for both tree and ultrametrics. This uses the special structure of L₁ in relation to hierarchies.
- The status of L_k is wide open for 1 < k < ∞. All we know is that the approximation factor is between Ω(1) and O((log n)(log log n)).
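To make the objective concrete, here is a minimal Python sketch (not from the paper) that computes the tree metric T induced by a candidate weighted tree and evaluates the fitting error ‖D-T‖_k against a given distance matrix D. It assumes the networkx library for shortest-path computations; the toy instance and candidate tree are invented for illustration.

# Sketch: evaluate ||D - T||_k for a candidate weighted tree spanning the species.
from itertools import combinations
import networkx as nx   # assumed available; any shortest-path routine would do

def tree_metric(tree, species):
    """Pairwise path lengths in the weighted tree, restricted to the species set."""
    dist = dict(nx.all_pairs_dijkstra_path_length(tree, weight="weight"))
    return {(i, j): dist[i][j] for i, j in combinations(species, 2)}

def lk_error(D, T, k=1.0):
    """(sum over pairs of |D(i,j) - T(i,j)|^k)^(1/k)."""
    return sum(abs(D[p] - T[p]) ** k for p in D) ** (1.0 / k)

# Toy instance: measured distances that no tree metric fits exactly.
species = ["a", "b", "c"]
D = {("a", "b"): 2.0, ("a", "c"): 2.0, ("b", "c"): 3.0}

# Candidate tree: a star with one internal node r (made-up edge weights).
G = nx.Graph()
G.add_weighted_edges_from([("r", "a", 1.0), ("r", "b", 1.25), ("r", "c", 1.25)])

T = tree_metric(G, species)
print(lk_error(D, T, k=1))   # L_1 error of this candidate: 1.0
print(lk_error(D, T, k=2))   # L_2 error of this candidate: about 0.61

The numerical taxonomy problem asks for the tree (and edge weights) minimizing this error, which is what the approximation results above address.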

Cite as

Mikkel Thorup. Reconstructing the Tree of Life (Fitting Distances by Tree Metrics) (Invited Paper). In 18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 227, pp. 3:1-3:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{thorup:LIPIcs.SWAT.2022.3,
  author =	{Thorup, Mikkel},
  title =	{{Reconstructing the Tree of Life (Fitting Distances by Tree Metrics)}},
  booktitle =	{18th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2022)},
  pages =	{3:1--3:2},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-236-5},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{227},
  editor =	{Czumaj, Artur and Xin, Qin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SWAT.2022.3},
  URN =		{urn:nbn:de:0030-drops-161631},
  doi =		{10.4230/LIPIcs.SWAT.2022.3},
  annote =	{Keywords: Numerical taxonomy, computational phylogenetics, hierarchical clustering}
}
Document
Mildly Exponential Time Approximation Algorithms for Vertex Cover, Balanced Separator and Uniform Sparsest Cut

Authors: Pasin Manurangsi and Luca Trevisan

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
In this work, we study the trade-off between the running time of approximation algorithms and their approximation guarantees. By leveraging the structure of the "hard" instances of the Arora-Rao-Vazirani lemma [Sanjeev Arora et al., 2009; James R. Lee, 2005], we show that the Sum-of-Squares hierarchy can be adapted to provide "fast", but still exponential time, approximation algorithms for several problems in the regime where they are believed to be NP-hard. Specifically, our framework yields the following algorithms; here n denotes the number of vertices of the graph and r can be any positive real number greater than 1 (possibly depending on n).

- A (2 - 1/O(r))-approximation algorithm for Vertex Cover that runs in exp(n/2^{r^2}) n^{O(1)} time.
- O(r)-approximation algorithms for Uniform Sparsest Cut and Balanced Separator that run in exp(n/2^{r^2}) n^{O(1)} time.

Our algorithm for Vertex Cover improves upon Bansal et al.'s algorithm [Nikhil Bansal et al., 2017], which achieves a (2 - 1/O(r))-approximation in time exp(n/r^r) n^{O(1)}. For Uniform Sparsest Cut and Balanced Separator, our algorithms improve upon the O(r)-approximation exp(n/2^r) n^{O(1)}-time algorithms that follow from a work of Charikar et al. [Moses Charikar et al., 2010].
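For a rough sense of the speed-up, the sketch below (not from the paper; the values of n and r are arbitrary) compares the exponents inside the exp(·) term for the three running-time bounds mentioned in the abstract.

# Back-of-the-envelope comparison of the exp(.) exponents in the three bounds:
#   this work:                exp(n / 2^(r^2)) * n^O(1)
#   Bansal et al. (Vertex Cover): exp(n / r^r) * n^O(1)
#   via Charikar et al.:      exp(n / 2^r)     * n^O(1)
def exponents(n, r):
    return {"n/2^(r^2)": n / 2 ** (r * r),
            "n/r^r":     n / r ** r,
            "n/2^r":     n / 2 ** r}

n = 10_000
for r in (2, 3, 4):
    print(f"r = {r}:", {name: round(v, 2) for name, v in exponents(n, r).items()})
# For the same approximation parameter r, the exponent n/2^(r^2) is far smaller,
# so the same guarantee is reached with a much smaller exponential term.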

Cite as

Pasin Manurangsi and Luca Trevisan. Mildly Exponential Time Approximation Algorithms for Vertex Cover, Balanced Separator and Uniform Sparsest Cut. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 20:1-20:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{manurangsi_et_al:LIPIcs.APPROX-RANDOM.2018.20,
  author =	{Manurangsi, Pasin and Trevisan, Luca},
  title =	{{Mildly Exponential Time Approximation Algorithms for Vertex Cover, Balanced Separator and Uniform Sparsest Cut}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{20:1--20:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.20},
  URN =		{urn:nbn:de:0030-drops-94241},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.20},
  annote =	{Keywords: Approximation algorithms, Exponential-time algorithms, Vertex Cover, Sparsest Cut, Balanced Separator}
}
Document
Min-Cost Bipartite Perfect Matching with Delays

Authors: Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer

Published in: LIPIcs, Volume 81, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)


Abstract
In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably ride-sharing platforms.

MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an O(log^2(n)+log(Delta))-competitive randomized algorithm on n-point metric spaces with aspect ratio Delta.

Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of Omega(sqrt(log(n)/log(log(n)))) on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from Omega(sqrt(log(n))) (shown by Azar et al., SODA'17) to Omega(log(n)/log(log(n))), thus almost matching their upper bound of O(log(n)). Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to O(log(n)). The key ingredient of the algorithm is an O(h)-competitive randomized algorithm for MBPMD on weighted trees of height h. Third, we provide an O(h)-competitive deterministic algorithm for MBPMD on weighted trees of height h. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.
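The small Python sketch below (illustrative only, not the paper's algorithm) spells out the MBPMD objective: the cost of a matching schedule is the connection cost of the matched pairs plus the total waiting time of the requests. The toy metric, requests, and schedules are invented to show the trade-off between matching immediately and waiting for a closer partner.

# Sketch: cost of a matching schedule = connection cost + delay cost.
from dataclasses import dataclass

@dataclass
class Request:
    point: str      # location in the metric space
    arrival: float  # arrival time
    positive: bool  # polarity: positive or negative

def mbpmd_cost(requests, dist, schedule):
    """schedule: list of (i, j, t) meaning requests i and j are matched at time t."""
    total = 0.0
    for i, j, t in schedule:
        a, b = requests[i], requests[j]
        assert a.positive != b.positive, "only opposite polarities may be matched"
        assert t >= max(a.arrival, b.arrival), "cannot match before both arrive"
        total += dist[(a.point, b.point)]           # connection cost
        total += (t - a.arrival) + (t - b.arrival)  # delay cost
    return total

# Toy two-point metric and four requests (two positive, two negative).
dist = {("u", "u"): 0.0, ("u", "v"): 10.0, ("v", "u"): 10.0, ("v", "v"): 0.0}
reqs = [Request("u", 0.0, True), Request("v", 0.0, False),
        Request("v", 2.0, True), Request("u", 2.0, False)]

# Matching greedily across the metric vs. waiting for co-located partners:
print(mbpmd_cost(reqs, dist, [(0, 1, 0.0), (2, 3, 2.0)]))  # 10 + 0 + 10 + 0 = 20
print(mbpmd_cost(reqs, dist, [(0, 3, 2.0), (2, 1, 2.0)]))  # 0 + 2 + 0 + 2 = 4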

Cite as

Itai Ashlagi, Yossi Azar, Moses Charikar, Ashish Chiplunkar, Ofir Geri, Haim Kaplan, Rahul Makhijani, Yuyi Wang, and Roger Wattenhofer. Min-Cost Bipartite Perfect Matching with Delays. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 81, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{ashlagi_et_al:LIPIcs.APPROX-RANDOM.2017.1,
  author =	{Ashlagi, Itai and Azar, Yossi and Charikar, Moses and Chiplunkar, Ashish and Geri, Ofir and Kaplan, Haim and Makhijani, Rahul and Wang, Yuyi and Wattenhofer, Roger},
  title =	{{Min-Cost Bipartite Perfect Matching with Delays}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-044-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{81},
  editor =	{Jansen, Klaus and Rolim, Jos\'{e} D. P. and Williamson, David P. and Vempala, Santosh S.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  URN =		{urn:nbn:de:0030-drops-75509},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2017.1},
  annote =	{Keywords: online algorithms with delayed service, bipartite matching, competitive analysis}
}
Document
Invited Talk
Bypassing Worst Case Analysis: Tensor Decomposition and Clustering (Invited Talk)

Authors: Moses S. Charikar

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
Typical worst case analysis of algorithms has led to a rich theory, but suffers from many pitfalls. This has inspired several approaches to bypass worst case analysis. In this talk, I will describe two vignettes from recent work in this realm.

In the first part of the talk, I will discuss tensor decomposition -- a natural higher dimensional analog of matrix decomposition. Low rank tensor decompositions have proved to be a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. Yet, they pose significant challenges for algorithm design as most problems about tensors are NP-hard. I will discuss a smoothed analysis framework for analyzing algorithms for tensor decomposition which models realistic instances of learning problems and allows us to overcome many of the usual limitations of using tensor methods.

In the second part of the talk, I will explore the phenomenon of convex relaxations returning integer solutions. Clearly this is not true in the worst case. I will discuss instances of discrete optimization problems where, for a suitable distribution on inputs, LP and SDP relaxations produce integer solutions with high probability. This has been studied in the context of LP decoding, sparse recovery, stochastic block models and so on. I will mention some recent results for clustering problems: when points are drawn from a distribution over k sufficiently separated clusters, the well known k-median relaxation and a (not so well known) SDP relaxation for k-means exactly recover the clusters.
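As a tiny illustration of the first vignette (an independent sketch, not from the talk), the Python snippet below builds an order-3 tensor of rank k as a sum of outer products. Unlike a low-rank matrix factorization, which can be re-factored by any invertible change of basis, such tensor decompositions are generically unique up to scaling and permutation, which is what makes them attractive for learning generative models.

# Build a rank-k order-3 tensor T = sum_r a_r (x) b_r (x) c_r from random factors.
import numpy as np

rng = np.random.default_rng(0)
k, d = 2, 4
A = rng.standard_normal((d, k))
B = rng.standard_normal((d, k))
C = rng.standard_normal((d, k))

# T[i, j, l] = sum_r A[i, r] * B[j, r] * C[l, r]
T = np.einsum("ir,jr,lr->ijl", A, B, C)
print(T.shape)  # (4, 4, 4)

# Matrix analogue: M = A B^T admits many factorizations (A R)(B R^{-T})^T for any
# invertible R, so the factors are not identifiable from M alone; the order-3
# tensor T, in contrast, generically determines A, B, C up to scaling and
# permutation of the rank-1 components.
M = A @ B.T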

Cite as

Moses S. Charikar. Bypassing Worst Case Analysis: Tensor Decomposition and Clustering (Invited Talk). In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, p. 1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{charikar:LIPIcs.FSTTCS.2015.1,
  author =	{Charikar, Moses S.},
  title =	{{Bypassing Worst Case Analysis: Tensor Decomposition and Clustering}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{1--1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.1},
  URN =		{urn:nbn:de:0030-drops-56647},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.1},
  annote =	{Keywords: tensor decomposition, smoothed analysis, convex relaxations, integrality}
}
