13 Search Results for "Venkatasubramanian, Suresh"


Document
Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering

Authors: Sina Bagheri Nezhad, Sayan Bandyapadhyay, and Tianzhi Chen

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
In a seminal work, Chierichetti et al. [Chierichetti et al., 2017] introduced the (t,k)-fair clustering problem: Given a set of red points and a set of blue points in a metric space, a clustering is called fair if the number of red points in each cluster is at most t times and at least 1/t times the number of blue points in that cluster. The goal is to compute a fair clustering with at most k clusters that optimizes a certain objective function. For this problem, they designed polynomial-time O(1)- and O(t)-approximations for the k-center and k-median objectives, respectively. Recently, Carta et al. [Carta et al., 2024] studied this problem with the sum-of-radii objective and obtained a (6+ε)-approximation with running time O((k log_{1+ε}(k/ε))^k n^O(1)), i.e., fixed-parameter tractable in k. Here n is the input size. In this work, we design the first polynomial-time O(1)-approximation for (t,k)-fair clustering with the sum-of-radii objective, improving the result of Carta et al. Our result places sum-of-radii in the same group of objectives as k-center, which admit polynomial-time O(1)-approximations. This result also implies a polynomial-time O(1)-approximation for the Euclidean version of the problem, for which an f(k)⋅n^O(1)-time (1+ε)-approximation was known due to Drexler et al. [Drexler et al., 2023]. Here f is an exponential function of k. We also extend our result to an arbitrary number 𝓁 ≥ 2 of colors when t = 1. This matches known results for the k-center and k-median objectives in this case. The significant differences between sum-of-radii and the k-center and k-median objectives present several complex challenges, all of which we overcome in this work. Our main contribution is a novel cluster-merging-based analysis technique for sum-of-radii that helps us achieve the constant-approximation bounds.
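The fairness condition in the abstract is easy to state concretely. A minimal sketch (a hypothetical helper, not from the paper) that checks whether a given two-color clustering satisfies the (t,k)-fairness balance condition:

```python
from collections import Counter

def is_t_fair(clusters, colors, t):
    """Check the balance condition of (t,k)-fair clustering:
    in every cluster, #red <= t * #blue and #blue <= t * #red
    (the latter is equivalent to #red >= (1/t) * #blue)."""
    for cluster in clusters:
        counts = Counter(colors[p] for p in cluster)
        red, blue = counts["red"], counts["blue"]
        if red > t * blue or blue > t * red:
            return False
    return True

colors = {0: "red", 1: "blue", 2: "red", 3: "blue", 4: "red"}
print(is_t_fair([[0, 1], [2, 3]], colors, 1))     # True: each cluster is balanced
print(is_t_fair([[0, 2, 4], [1, 3]], colors, 1))  # False: first cluster is all red
```

The point names, color labels, and helper name are illustrative only; the paper's contribution is the approximation algorithm, not this check.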

Cite as

Sina Bagheri Nezhad, Sayan Bandyapadhyay, and Tianzhi Chen. Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 62:1-62:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bagherinezhad_et_al:LIPIcs.ESA.2025.62,
  author =	{Bagheri Nezhad, Sina and Bandyapadhyay, Sayan and Chen, Tianzhi},
  title =	{{Polynomial-Time Constant-Approximation for Fair Sum-Of-Radii Clustering}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{62:1--62:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.62},
  URN =		{urn:nbn:de:0030-drops-245309},
  doi =		{10.4230/LIPIcs.ESA.2025.62},
  annote =	{Keywords: fair clustering, sum-of-radii clustering, approximation algorithms}
}
Document
Buffered Partially-Persistent External-Memory Search Trees

Authors: Gerth Stølting Brodal, Casper Moldrup Rysgaard, and Rolf Svenning

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
We present an optimal partially-persistent external-memory search tree with amortized I/O bounds matching those achieved by the non-persistent B^{ε}-tree by Brodal and Fagerberg [SODA 2003]. In a partially-persistent data structure, each update creates a new version. All past versions can be queried, but only the current version can be updated. Operations should be efficient with respect to the size N_v of the accessed version v. For any parameter 0 < ε < 1, our data structure supports insertions and deletions in amortized 𝒪(1/(ε B^{1 - ε}) log_B N_v) I/Os, where B is the external-memory block size. It also supports successor and range reporting queries in amortized 𝒪(1/ε log_B N_v + K/B) I/Os, where K is the number of keys reported. The space usage of the data structure is linear in the total number of updates. We make the standard and minimal assumption that the internal memory has size M ≥ 2B. The previous state-of-the-art external-memory partially-persistent search tree by Arge, Danner, and Teh [JEA 2003] supports all operations in worst-case 𝒪(log_B N_v + K/B) I/Os, matching the bounds achieved by the classical B-tree by Bayer and McCreight [Acta Informatica 1972]. Our data structure successfully combines buffering updates with partial persistence. The I/O bounds can also be achieved in the worst case, by slightly modifying our data structure, under the requirement that the memory size is M = Ω(B^{1-ε} log₂(max_v N_v)). For updates, where the I/O bound is o(1), we assume that the I/Os are spread out evenly among the updates (by performing buffer overflows incrementally). The worst-case result slightly improves the memory requirement over the previous ephemeral external-memory dictionary by Das, Iacono, and Nekrich [ISAAC 2022], who achieved matching worst-case I/O bounds but required M = Ω(B log_B N), where N is the size of the current dictionary.

Cite as

Gerth Stølting Brodal, Casper Moldrup Rysgaard, and Rolf Svenning. Buffered Partially-Persistent External-Memory Search Trees. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 82:1-82:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{brodal_et_al:LIPIcs.ESA.2025.82,
  author =	{Brodal, Gerth St{\o}lting and Rysgaard, Casper Moldrup and Svenning, Rolf},
  title =	{{Buffered Partially-Persistent External-Memory Search Trees}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{82:1--82:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.82},
  URN =		{urn:nbn:de:0030-drops-245507},
  doi =		{10.4230/LIPIcs.ESA.2025.82},
  annote =	{Keywords: B-tree, buffered updates, partial persistence, external memory}
}
Document
Fast Kd-Trees for the Kullback-Leibler Divergence and Other Decomposable Bregman Divergences

Authors: Tuyen Pham and Hubert Wagner

Published in: LIPIcs, Volume 349, 19th International Symposium on Algorithms and Data Structures (WADS 2025)


Abstract
The paper's contributions span theory and implementation. First, we prove that Kd-trees can be extended to ℝ^d with the distance measured by an arbitrary Bregman divergence. Perhaps surprisingly, this shows that the triangle inequality is not necessary for correct pruning in Kd-trees. Second, we offer an efficient algorithm and C++ implementation of nearest neighbour search for decomposable Bregman divergences. The implementation supports the Kullback-Leibler divergence (relative entropy), a popular distance between probability vectors that is commonly used in statistics and machine learning. This is a step toward broadening the usage of computational geometry algorithms. Our benchmarks show that our implementation efficiently handles both exact and approximate nearest neighbour queries. Compared to a linear search, we achieve a speedup of two orders of magnitude in practical scenarios in dimensions up to 100. Our solution is simpler and more efficient than competing methods.
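For reference, the Kullback-Leibler divergence mentioned above is "decomposable" in the sense that it is a sum of independent per-coordinate terms, which is the structural property such Kd-tree methods rely on. A minimal Python sketch of the divergence itself (not the paper's C++ implementation):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between probability vectors.
    Decomposable: the total is a sum of one-dimensional terms, one per
    coordinate, so each coordinate can be bounded independently."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0: the divergence of a vector to itself vanishes
print(kl_divergence(p, q))  # positive, and asymmetric: D(p||q) != D(q||p)
```

Note that KL is not a metric (it is asymmetric and violates the triangle inequality), which is exactly why the paper's pruning result is notable.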

Cite as

Tuyen Pham and Hubert Wagner. Fast Kd-Trees for the Kullback-Leibler Divergence and Other Decomposable Bregman Divergences. In 19th International Symposium on Algorithms and Data Structures (WADS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 349, pp. 45:1-45:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{pham_et_al:LIPIcs.WADS.2025.45,
  author =	{Pham, Tuyen and Wagner, Hubert},
  title =	{{Fast Kd-Trees for the Kullback-Leibler Divergence and Other Decomposable Bregman Divergences}},
  booktitle =	{19th International Symposium on Algorithms and Data Structures (WADS 2025)},
  pages =	{45:1--45:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-398-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{349},
  editor =	{Morin, Pat and Oh, Eunjin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WADS.2025.45},
  URN =		{urn:nbn:de:0030-drops-242766},
  doi =		{10.4230/LIPIcs.WADS.2025.45},
  annote =	{Keywords: Kd-tree, k-d tree, nearest neighbour search, Bregman divergence, decomposable Bregman divergence, KL divergence, relative entropy, cross entropy, Shannon’s entropy}
}
Document
Track A: Algorithms, Complexity and Games
Faster Fréchet Distance Under Transformations

Authors: Kevin Buchin, Maike Buchin, Zijin Huang, André Nusser, and Sampson Wong

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
We study the problem of computing the Fréchet distance between two polygonal curves under transformations. First, we consider translations in the Euclidean plane. Given two curves π and σ of total complexity n and a threshold δ ≥ 0, we present an 𝒪̃(n^{7 + 1/3}) time algorithm to determine whether there exists a translation t ∈ ℝ² such that the Fréchet distance between π and σ + t is at most δ. This improves on the previous best result, which is an 𝒪(n⁸) time algorithm. We then generalize this result to any class of rationally parameterized transformations, which includes translation, rotation, scaling, and arbitrary affine transformations. For a class T of rationally parametrized transformations with k degrees of freedom, we show that one can determine whether there is a transformation τ ∈ T such that the Fréchet distance between π and τ(σ) is at most δ in 𝒪̃(n^{3k+4/3}) time.

Cite as

Kevin Buchin, Maike Buchin, Zijin Huang, André Nusser, and Sampson Wong. Faster Fréchet Distance Under Transformations. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 36:1-36:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{buchin_et_al:LIPIcs.ICALP.2025.36,
  author =	{Buchin, Kevin and Buchin, Maike and Huang, Zijin and Nusser, Andr\'{e} and Wong, Sampson},
  title =	{{Faster Fr\'{e}chet Distance Under Transformations}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{36:1--36:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.36},
  URN =		{urn:nbn:de:0030-drops-234137},
  doi =		{10.4230/LIPIcs.ICALP.2025.36},
  annote =	{Keywords: Fr\'{e}chet distance, curve similarity, shape matching}
}
Document
Transforming Dogs on the Line: On the Fréchet Distance Under Translation or Scaling in 1D

Authors: Lotte Blank, Jacobus Conradi, Anne Driemel, Benedikt Kolbe, André Nusser, and Marena Richter

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
The Fréchet distance is a computational mainstay for comparing polygonal curves. The Fréchet distance under translation, which is a translation invariant version, considers the similarity of two curves independent of their location in space. It is defined as the minimum Fréchet distance that arises from allowing arbitrary translations of the input curves. This problem and numerous variants of the Fréchet distance under some transformations have been studied, with more work concentrating on the discrete Fréchet distance, leaving a significant gap between the discrete and continuous versions of the Fréchet distance under transformations. Our contribution is twofold: First, we present an algorithm for the Fréchet distance under translation on 1-dimensional curves of complexity n with a running time of 𝒪(n^{8/3} log³ n). To achieve this, we develop a novel framework for the problem for 1-dimensional curves, which also applies to other scenarios and leads to our second contribution. We present an algorithm with the same running time of 𝒪(n^{8/3} log³ n) for the Fréchet distance under scaling for 1-dimensional curves. For both algorithms we match the running times of the discrete case and improve the previously best known bounds of 𝒪̃(n⁴). Our algorithms rely on technical insights but are conceptually simple, essentially reducing the continuous problem to the discrete case across different length scales.

Cite as

Lotte Blank, Jacobus Conradi, Anne Driemel, Benedikt Kolbe, André Nusser, and Marena Richter. Transforming Dogs on the Line: On the Fréchet Distance Under Translation or Scaling in 1D. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 22:1-22:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{blank_et_al:LIPIcs.SoCG.2025.22,
  author =	{Blank, Lotte and Conradi, Jacobus and Driemel, Anne and Kolbe, Benedikt and Nusser, Andr\'{e} and Richter, Marena},
  title =	{{Transforming Dogs on the Line: On the Fr\'{e}chet Distance Under Translation or Scaling in 1D}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{22:1--22:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.22},
  URN =		{urn:nbn:de:0030-drops-231746},
  doi =		{10.4230/LIPIcs.SoCG.2025.22},
  annote =	{Keywords: Fr\'{e}chet distance under translation, Fr\'{e}chet distance under scaling, time series, shape matching}
}
Document
Kernel Multiaccuracy

Authors: Carol Xuan Long, Wael Alghamdi, Alexander Glynn, Yixuan Wu, and Flavio P. Calmon

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Predefined demographic groups often overlook the subpopulations most impacted by model errors, leading to a growing emphasis on data-driven methods that pinpoint where models underperform. The emerging field of multi-group fairness addresses this by ensuring models perform well across a wide range of group-defining functions, rather than relying on fixed demographic categories. We demonstrate that recently introduced notions of multi-group fairness can be equivalently formulated as integral probability metrics (IPMs). IPMs are the common information-theoretic tool underlying definitions such as multiaccuracy, multicalibration, and outcome indistinguishability. For multiaccuracy, this connection leads to a simple yet powerful procedure for achieving multiaccuracy with respect to an infinite-dimensional class of functions defined by a reproducing kernel Hilbert space (RKHS): first perform a kernel regression of a model’s errors, then subtract the resulting function from the model’s predictions. We combine these results to develop a post-processing method that improves multiaccuracy with respect to bounded-norm functions in an RKHS, enjoys provable performance guarantees, and, in binary classification benchmarks, achieves favorable multiaccuracy relative to competing methods.
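The two-step recipe described in the abstract (kernel-regress the model's errors, then subtract the fitted function from the predictions) can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices (RBF kernel, kernel ridge regression, synthetic data), not the paper's method or guarantees:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multiaccuracy_postprocess(X, preds, labels, lam=1e-2, gamma=1.0):
    """Fit the residuals by kernel ridge regression, then subtract the
    fitted error function from the predictions."""
    residuals = preds - labels                                    # model errors
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), residuals)  # ridge fit
    correction = K @ alpha                                        # fitted errors
    return preds - correction

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
labels = (X[:, 0] > 0).astype(float)
preds = labels + 0.3 * (X[:, 1] > 0)  # systematic error on one subgroup
adjusted = multiaccuracy_postprocess(X, preds, labels)
# the subgroup-correlated error shrinks after post-processing
```

Here the kernel, regularization parameter `lam`, and synthetic subgroup error are all hypothetical choices made for the sketch.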

Cite as

Carol Xuan Long, Wael Alghamdi, Alexander Glynn, Yixuan Wu, and Flavio P. Calmon. Kernel Multiaccuracy. In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{long_et_al:LIPIcs.FORC.2025.7,
  author =	{Long, Carol Xuan and Alghamdi, Wael and Glynn, Alexander and Wu, Yixuan and Calmon, Flavio P.},
  title =	{{Kernel Multiaccuracy}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.7},
  URN =		{urn:nbn:de:0030-drops-231341},
  doi =		{10.4230/LIPIcs.FORC.2025.7},
  annote =	{Keywords: algorithmic fairness, integral probability metrics, information theory}
}
Document
Repairing Databases over Metric Spaces with Coincidence Constraints

Authors: Youri Kaminsky, Benny Kimelfeld, Ester Livshits, Felix Naumann, and David Wajc

Published in: LIPIcs, Volume 328, 28th International Conference on Database Theory (ICDT 2025)


Abstract
Datasets often contain values that naturally reside in a metric space: numbers, strings, geographical locations, machine-learned embeddings in a vector space, and so on. We study the computational complexity of repairing inconsistent databases that violate integrity constraints, where the database values belong to an underlying metric space. The goal is to update the database values to retain consistency while minimizing the total distance between the original values and the repaired ones. We consider what we refer to as coincidence constraints, which include unary key constraints, inclusion constraints, foreign keys, and generally any restriction on the relationship between the numbers of cells of different labels (attributes) coinciding in a single value, for a fixed attribute set. We begin by showing that the problem is APX-hard for general metric spaces. We then present an algorithm solving the problem optimally for tree metrics, which generalize both the line metric (i.e., where repaired values are numbers) and the discrete metric (i.e., where we simply count the number of changed values). Combining our algorithm for tree metrics and a classic result on probabilistic tree embeddings, we design a (high probability) logarithmic-ratio approximation for general metrics. We also study the variant of the problem where we limit the allowed change of each individual value. In this variant, it is already NP-complete to decide the existence of any legal repair for a general metric, and we present a polynomial-time repairing algorithm for the case of a line metric.

Cite as

Youri Kaminsky, Benny Kimelfeld, Ester Livshits, Felix Naumann, and David Wajc. Repairing Databases over Metric Spaces with Coincidence Constraints. In 28th International Conference on Database Theory (ICDT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 328, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{kaminsky_et_al:LIPIcs.ICDT.2025.14,
  author =	{Kaminsky, Youri and Kimelfeld, Benny and Livshits, Ester and Naumann, Felix and Wajc, David},
  title =	{{Repairing Databases over Metric Spaces with Coincidence Constraints}},
  booktitle =	{28th International Conference on Database Theory (ICDT 2025)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-364-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{328},
  editor =	{Roy, Sudeepa and Kara, Ahmet},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICDT.2025.14},
  URN =		{urn:nbn:de:0030-drops-229554},
  doi =		{10.4230/LIPIcs.ICDT.2025.14},
  annote =	{Keywords: Database repairs, metric spaces, coincidence constraints, inclusion constraints, foreign-key constraints}
}
Document
Online Disjoint Set Covers: Randomization Is Not Necessary

Authors: Marcin Bienkowski, Jarosław Byrka, and Łukasz Jeż

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
In the online disjoint set covers problem, the edges of a hypergraph are revealed online, and the goal is to partition them into a maximum number of disjoint set covers. That is, n nodes of a hypergraph are given at the beginning, and then a sequence of hyperedges (subsets of [n]) is presented to an algorithm. For each hyperedge, an online algorithm must assign a color (an integer). Once the input terminates, the gain of the algorithm is the number of colors that correspond to valid set covers (i.e., the union of the hyperedges that have that color contains all n nodes). We present a deterministic online algorithm that is O(log² n)-competitive, exponentially improving on the previous bound of O(n) and matching the performance of the best randomized algorithm by Emek et al. [ESA 2019]. For color selection, our algorithm uses a novel potential function, which can be seen as an online counterpart of the derandomization method of conditional probabilities and pessimistic estimators. There are only a few cases where derandomization has been successfully used in the field of online algorithms. In contrast to previous approaches, our result extends to the following new challenges: (i) the potential function derandomizes not only the Chernoff bound, but also the coupon collector’s problem, (ii) the optimal value Opt of the maximization problem is not bounded a priori, and (iii) we do not produce a fractional solution first, but work directly on the input.
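The gain being maximized can be stated concretely. A minimal sketch (a hypothetical scoring helper, not the paper's algorithm) that evaluates a color assignment after the input terminates:

```python
def gain(n, hyperedges, coloring):
    """Count the colors whose assigned hyperedges jointly cover all n nodes.
    Assumes hyperedges are subsets of {0, ..., n-1}."""
    covered = {}
    for edge, color in zip(hyperedges, coloring):
        covered.setdefault(color, set()).update(edge)
    return sum(1 for nodes in covered.values() if len(nodes) == n)

# nodes {0, 1, 2}; the assignment forms two disjoint set covers
hyperedges = [{0, 1}, {2}, {0, 2}, {1}]
print(gain(3, hyperedges, [0, 0, 1, 1]))  # 2
```

The online difficulty, of course, is that each color must be chosen when a hyperedge arrives, without knowing the rest of the sequence; this helper only scores a finished assignment.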

Cite as

Marcin Bienkowski, Jarosław Byrka, and Łukasz Jeż. Online Disjoint Set Covers: Randomization Is Not Necessary. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{bienkowski_et_al:LIPIcs.STACS.2025.18,
  author =	{Bienkowski, Marcin and Byrka, Jaros{\l}aw and Je\.{z}, {\L}ukasz},
  title =	{{Online Disjoint Set Covers: Randomization Is Not Necessary}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.18},
  URN =		{urn:nbn:de:0030-drops-228433},
  doi =		{10.4230/LIPIcs.STACS.2025.18},
  annote =	{Keywords: disjoint set covers, derandomization, pessimistic estimator, potential function, online algorithms, competitive analysis}
}
Document
Sublinear Algorithms for MAXCUT and Correlation Clustering

Authors: Aditya Bhaskara, Samira Daruki, and Suresh Venkatasubramanian

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study sublinear algorithms for two fundamental graph problems, MAXCUT and correlation clustering. Our focus is on constructing core-sets as well as developing streaming algorithms for these problems. Constant-space algorithms are known for dense graphs for these problems, while Ω(n) lower bounds exist (in the streaming setting) for sparse graphs. Our goal in this paper is to bridge the gap between these extremes. Our first result is to construct core-sets of size 𝒪̃(n^{1-δ}) for both problems, on graphs with average degree n^{δ} (for any δ > 0). This turns out to be optimal, under the exponential time hypothesis (ETH). Our core-set analysis is based on studying randomly induced sub-problems of optimization problems. To the best of our knowledge, all the known results in our parameter range rely crucially on near-regularity assumptions. We avoid these by using a biased sampling approach, which we analyze using recent results on concentration of quadratic functions. We then show that our construction yields a 2-pass streaming (1+ε)-approximation for both problems; the algorithm uses 𝒪̃(n^{1-δ}) space, for graphs of average degree n^{δ}.

Cite as

Aditya Bhaskara, Samira Daruki, and Suresh Venkatasubramanian. Sublinear Algorithms for MAXCUT and Correlation Clustering. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 16:1-16:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@InProceedings{bhaskara_et_al:LIPIcs.ICALP.2018.16,
  author =	{Bhaskara, Aditya and Daruki, Samira and Venkatasubramanian, Suresh},
  title =	{{Sublinear Algorithms for MAXCUT and Correlation Clustering}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{16:1--16:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.16},
  URN =		{urn:nbn:de:0030-drops-90203},
  doi =		{10.4230/LIPIcs.ICALP.2018.16},
  annote =	{Keywords: Sublinear algorithms, Streaming algorithms, Core-sets, Maximum cut, Correlation clustering}
}
Document
Computational Philosophy: On Fairness in Automated Decision Making

Authors: Suresh Venkatasubramanian

Published in: LIPIcs, Volume 92, 28th International Symposium on Algorithms and Computation (ISAAC 2017)


Abstract
As more and more of our lives are taken over by automated decision making systems (whether it be for hiring, college admissions, criminal justice or loans), we have begun to ask whether these systems are making decisions that humans would consider fair, or non-discriminatory. The problem is that notions of fairness, discrimination, transparency and accountability are concepts in society and the law that have no obvious formal analog. But our algorithms speak the language of mathematics. And so if we want to encode our beliefs into automated decision systems, we must formalize them precisely, while still capturing the natural imprecision and ambiguity in these ideas. In this talk, I'll survey the new field of fairness, accountability and transparency in computer science. I'll focus on how we formalize these notions, how they connect to traditional notions in theoretical computer science, and even describe some impossibility results that arise from this formalization. I'll conclude with some open questions.

Cite as

Suresh Venkatasubramanian. Computational Philosophy: On Fairness in Automated Decision Making. In 28th International Symposium on Algorithms and Computation (ISAAC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 92, p. 2:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)



@InProceedings{venkatasubramanian:LIPIcs.ISAAC.2017.2,
  author =	{Venkatasubramanian, Suresh},
  title =	{{Computational Philosophy: On Fairness in Automated Decision Making}},
  booktitle =	{28th International Symposium on Algorithms and Computation (ISAAC 2017)},
  pages =	{2:1--2:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-054-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{92},
  editor =	{Okamoto, Yoshio and Tokuyama, Takeshi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2017.2},
  URN =		{urn:nbn:de:0030-drops-82747},
  doi =		{10.4230/LIPIcs.ISAAC.2017.2},
  annote =	{Keywords: fairness, transparency, accountability, impossibility results}
}
Document
Streaming Verification of Graph Properties

Authors: Amirali Abdullah, Samira Daruki, Chitradeep Dutta Roy, and Suresh Venkatasubramanian

Published in: LIPIcs, Volume 64, 27th International Symposium on Algorithms and Computation (ISAAC 2016)


Abstract
Streaming interactive proofs (SIPs) are a framework for outsourced computation. A computationally limited streaming client (the verifier) hands over a large data set to an untrusted server (the prover) in the cloud, and the two parties run a protocol to confirm the correctness of the result with high probability. SIPs are particularly interesting for problems that are hard to solve (or even approximate) well in a streaming setting. The most notable of these problems is finding maximum matchings, which has received intense interest in recent years but has strong lower bounds even for constant-factor approximations. In this paper, we present efficient streaming interactive proofs that can verify maximum matchings exactly. Our results cover all flavors of matchings (bipartite/non-bipartite and weighted). In addition, we also present streaming verifiers for approximate metric TSP. In particular, these are the first efficient results for weighted matchings and for metric TSP in any streaming verification model.

Cite as

Amirali Abdullah, Samira Daruki, Chitradeep Dutta Roy, and Suresh Venkatasubramanian. Streaming Verification of Graph Properties. In 27th International Symposium on Algorithms and Computation (ISAAC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 64, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)



@InProceedings{abdullah_et_al:LIPIcs.ISAAC.2016.3,
  author =	{Abdullah, Amirali and Daruki, Samira and Roy, Chitradeep Dutta and Venkatasubramanian, Suresh},
  title =	{{Streaming Verification of Graph Properties}},
  booktitle =	{27th International Symposium on Algorithms and Computation (ISAAC 2016)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-026-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{64},
  editor =	{Hong, Seok-Hee},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2016.3},
  URN =		{urn:nbn:de:0030-drops-67730},
  doi =		{10.4230/LIPIcs.ISAAC.2016.3},
  annote =	{Keywords: streaming interactive proofs, verification, matching, travelling salesman problem, graph algorithms}
}
Document
Verifiable Stream Computation and Arthur–Merlin Communication

Authors: Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, and Suresh Venkatasubramanian

Published in: LIPIcs, Volume 33, 30th Conference on Computational Complexity (CCC 2015)


Abstract
In the setting of streaming interactive proofs (SIPs), a client (verifier) needs to compute a given function on a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a third party service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service cannot simply supply the desired answer; it must convince the verifier of its correctness via a short interaction after the stream has been seen. In this work we study "barely interactive" SIPs. Specifically, we show that two or three rounds of interaction suffice to solve several query problems - including Index, Median, Nearest Neighbor Search, Pattern Matching, and Range Counting - with polylogarithmic space and communication costs. Such efficiency with O(1) rounds of interaction was thought to be impossible based on previous work. On the other hand, we initiate a formal study of the limitations of constant-round SIPs by introducing a new hierarchy of communication models called Online Interactive Proofs (OIPs). The online nature of these models is analogous to the streaming restriction placed upon the verifier in an SIP. We give upper and lower bounds that (1) characterize, up to quadratic blowups, every finite level of the OIP hierarchy in terms of other well-known communication complexity classes, (2) separate the first four levels of the hierarchy, and (3) reveal that the hierarchy collapses to the fourth level. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs, establishes limits on the power of existing techniques for developing constant-round SIPs, and provides a new characterization of (non-online) Arthur-Merlin communication in terms of an online model.

Cite as

Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, and Suresh Venkatasubramanian. Verifiable Stream Computation and Arthur–Merlin Communication. In 30th Conference on Computational Complexity (CCC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 33, pp. 217-243, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)



@InProceedings{chakrabarti_et_al:LIPIcs.CCC.2015.217,
  author =	{Chakrabarti, Amit and Cormode, Graham and McGregor, Andrew and Thaler, Justin and Venkatasubramanian, Suresh},
  title =	{{Verifiable Stream Computation and Arthur–Merlin Communication}},
  booktitle =	{30th Conference on Computational Complexity (CCC 2015)},
  pages =	{217--243},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-81-1},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{33},
  editor =	{Zuckerman, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2015.217},
  URN =		{urn:nbn:de:0030-drops-50680},
  doi =		{10.4230/LIPIcs.CCC.2015.217},
  annote =	{Keywords: Arthur-Merlin communication complexity, streaming interactive proofs}
}
Document
Clustering With Center Constraints

Authors: Parinya Chalermsook and Suresh Venkatasubramanian

Published in: LIPIcs, Volume 24, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)


Abstract
In the classical maximum independent set problem, we are given a graph G of "conflicts" and are asked to find a maximum conflict-free subset. If we think of the remaining nodes as being "assigned" (at unit cost each) to one of these independent vertices and ask for an assignment of minimum cost, this yields the vertex cover problem. In this paper, we consider a more general scenario where the assignment costs might be given by a distance metric d (which can be unrelated to G) on the underlying set of vertices. This problem, in addition to being a natural generalization of vertex cover and an interesting variant of the k-median problem, also has connections to constrained clustering and database repair. Understanding the relation between the conflict structure (the graph) and the distance structure (the metric) for this problem turns out to be the key to isolating its complexity. We show that when the two structures are unrelated, the problem inherits a trivial upper bound from vertex cover and provide an almost matching lower bound on hardness of approximation. We then prove a number of lower and upper bounds that depend on the relationship between the two structures, including polynomial-time algorithms for special graphs.
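To make the objective concrete (a brute-force sketch on a tiny instance, not the paper's algorithm): the centers must form an independent set in the conflict graph G, every non-center is assigned to its nearest center under the metric d, and we minimize the total assignment cost. All function and variable names below are illustrative assumptions.

```python
# Illustrative brute force for the clustering-with-center-constraints
# objective: exhaustively try every independent set of centers and
# charge each non-center its distance to the nearest center.
from itertools import combinations

def best_centers(n, edges, d):
    """Return (min_cost, centers) over independent sets in the conflict graph.

    n     -- number of vertices (labeled 0..n-1)
    edges -- conflict edges of G as pairs of vertices
    d     -- n x n symmetric distance matrix (the metric, possibly unrelated to G)
    """
    E = {frozenset(e) for e in edges}
    best = (float("inf"), None)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            # centers must be conflict-free, i.e. independent in G
            if any(frozenset((u, v)) in E for u, v in combinations(S, 2)):
                continue
            cost = sum(min(d[v][c] for c in S) for v in range(n) if v not in S)
            best = min(best, (cost, S))
    return best
```

With unit distances this recovers vertex cover (the non-centers form a cover); a general metric d decouples the assignment cost from the conflict structure, which is the tension the paper studies. The exhaustive search is exponential and only meant to pin down the objective.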

Cite as

Parinya Chalermsook and Suresh Venkatasubramanian. Clustering With Center Constraints. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 24, pp. 401-412, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)



@InProceedings{chalermsook_et_al:LIPIcs.FSTTCS.2013.401,
  author =	{Chalermsook, Parinya and Venkatasubramanian, Suresh},
  title =	{{Clustering With Center Constraints}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2013)},
  pages =	{401--412},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-64-4},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{24},
  editor =	{Seth, Anil and Vishnoi, Nisheeth K.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2013.401},
  URN =		{urn:nbn:de:0030-drops-43898},
  doi =		{10.4230/LIPIcs.FSTTCS.2013.401},
  annote =	{Keywords: Clustering, vertex cover, approximation algorithms}
}
