Search Results

Documents authored by: Kumar, T. K. Satish; Kumar, Rohit; Kumar, Amit

Kumar, T. K. Satish

Document
FastMapSVM for Predicting CSP Satisfiability

Authors: Kexin Zheng, Ang Li, Han Zhang, and T. K. Satish Kumar

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Recognizing the satisfiability of Constraint Satisfaction Problems (CSPs) is NP-hard. Although several Machine Learning (ML) approaches have attempted this task by casting it as a binary classification problem, they have had only limited success for a variety of challenging reasons. First, the NP-hardness of the task makes it unamenable to straightforward approaches. Second, CSPs come in various forms and sizes, while many ML algorithms impose the same form and size on their training and test instances. Third, the representation of a CSP instance is not unique, since the variables and their domain values are unordered. In this paper, we propose using FastMapSVM, a recently developed ML framework that leverages a distance function between pairs of objects, for this task. We define a novel distance function between two CSP instances using maxflow computations. This distance function is well defined for CSPs of different sizes. It is also invariant to the ordering on the variables and their domain values. Therefore, our framework has broader applicability compared to other approaches. We discuss various representational and combinatorial advantages of FastMapSVM. Through experiments, we also show that it outperforms other state-of-the-art ML approaches.
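
FastMapSVM combines a FastMap embedding, driven entirely by a pairwise distance function, with a standard SVM classifier. Below is a minimal Python sketch of that generic pipeline, assuming a user-supplied distance function dist(a, b) between CSP instances; the paper's maxflow-based distance is not reproduced here, and names such as fastmap_embed and train_sat_classifier are illustrative.

# Generic FastMap-embedding + SVM pipeline (illustrative sketch only).
# `dist(a, b)` stands in for the paper's maxflow-based distance between two
# CSP instances; any symmetric, non-negative distance function can be used.
import random
from sklearn.svm import SVC

def fastmap_embed(objects, dist, dims=4):
    """Embed `objects` into R^dims with the classical FastMap heuristic."""
    n = len(objects)
    coords = [[0.0] * dims for _ in range(n)]

    def res2(i, j, k):
        # squared distance with the first k FastMap coordinates projected out
        base = dist(objects[i], objects[j]) ** 2
        proj = sum((coords[i][t] - coords[j][t]) ** 2 for t in range(k))
        return max(base - proj, 0.0)

    for k in range(dims):
        # pivot heuristic: start anywhere, jump to the farthest object twice
        a = random.randrange(n)
        b = max(range(n), key=lambda j: res2(a, j, k))
        a = max(range(n), key=lambda j: res2(b, j, k))
        dab2 = res2(a, b, k)
        if dab2 == 0.0:
            break  # all remaining residual distances are zero
        for i in range(n):
            coords[i][k] = (res2(a, i, k) + dab2 - res2(b, i, k)) / (2 * dab2 ** 0.5)
    return coords

def train_sat_classifier(train_csps, sat_labels, dist):
    """Train a binary SAT/UNSAT classifier on FastMap embeddings."""
    embeddings = fastmap_embed(train_csps, dist)
    clf = SVC(kernel="rbf")
    clf.fit(embeddings, sat_labels)
    return clf

In a full pipeline the pivot pairs chosen for each axis would also be stored, so that unseen test instances can be embedded consistently before being passed to the trained SVM.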

Cite as

Kexin Zheng, Ang Li, Han Zhang, and T. K. Satish Kumar. FastMapSVM for Predicting CSP Satisfiability. In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 40:1-40:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{zheng_et_al:LIPIcs.CP.2023.40,
  author =	{Zheng, Kexin and Li, Ang and Zhang, Han and Kumar, T. K. Satish},
  title =	{{FastMapSVM for Predicting CSP Satisfiability}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{40:1--40:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.40},
  URN =		{urn:nbn:de:0030-drops-190775},
  doi =		{10.4230/LIPIcs.CP.2023.40},
  annote =	{Keywords: Constraint Satisfaction Problems, Machine Learning, FastMapSVM}
}
Document
Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data

Authors: Chaithanya Basrur, Arambam James Singh, Arunesh Sinha, Akshat Kumar, and T. K. Satish Kumar

Published in: LIPIcs, Volume 235, 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)


Abstract
Increasing maritime trade often results in congestion in busy ports, thereby necessitating planning methods to avoid close quarter risky situations among vessels. Rapid digitization and automation of port operations and vessel navigation provide unique opportunities for significantly improving navigation safety. Our key contributions are as follows. First, given a set of future candidate trajectories for vessels in a traffic hotspot zone, we develop a multiagent trajectory optimization method to choose trajectories that result in the best overall close quarter risk reduction. Our novel MILP-based optimization method is more than an order-of-magnitude faster than a standard MILP for this problem, and runs in near real-time. Second, although automation has improved in maritime operations, current vessel traffic systems (in our case study of a busy Asian port) predict only a single future trajectory of a vessel based on linear extrapolation. Therefore, using historical data we learn a generative model that predicts multiple possible future trajectories of each vessel in a given traffic hotspot, reflecting past vessel movement patterns, and integrate it with our trajectory optimizer. Third, we validate our trajectory optimization and generative model extensively using a real world maritime traffic dataset containing 6 million Automated Identification System (AIS) data records detailing vessel movements over 1.5 years from one of the world’s busiest ports, demonstrating effective risk reduction.
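
For context, the abstract compares against "a standard MILP for this problem". One plausible baseline of that flavour, written in generic notation that is not taken from the paper, selects one candidate trajectory per vessel and linearizes the pairwise risk terms:

\begin{align*}
\min\;\; & \sum_{i < j} \sum_{p \in P_i} \sum_{q \in P_j} r_{ip,jq}\, y_{ip,jq} \\
\text{s.t.}\;\; & \sum_{p \in P_i} x_{ip} = 1 && \forall i, \\
& y_{ip,jq} \ge x_{ip} + x_{jq} - 1 && \forall i < j,\ p \in P_i,\ q \in P_j, \\
& x_{ip} \in \{0,1\}, \quad y_{ip,jq} \ge 0.
\end{align*}

Because the risk coefficients r_{ip,jq} are non-negative and the objective is minimized, each y_{ip,jq} equals x_{ip} x_{jq} at an optimal solution, so risk is charged only between pairs of trajectories that are both selected.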

Cite as

Chaithanya Basrur, Arambam James Singh, Arunesh Sinha, Akshat Kumar, and T. K. Satish Kumar. Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 235, pp. 5:1-5:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{basrur_et_al:LIPIcs.CP.2022.5,
  author =	{Basrur, Chaithanya and Singh, Arambam James and Sinha, Arunesh and Kumar, Akshat and Kumar, T. K. Satish},
  title =	{{Trajectory Optimization for Safe Navigation in Maritime Traffic Using Historical Data}},
  booktitle =	{28th International Conference on Principles and Practice of Constraint Programming (CP 2022)},
  pages =	{5:1--5:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-240-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{235},
  editor =	{Solnon, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2022.5},
  URN =		{urn:nbn:de:0030-drops-166341},
  doi =		{10.4230/LIPIcs.CP.2022.5},
  annote =	{Keywords: Multi-Agent Path Coordination, Maritime Traffic Control}
}
Document
Differential Programming via OR Methods

Authors: Shannon Sweitzer and T. K. Satish Kumar

Published in: LIPIcs, Volume 210, 27th International Conference on Principles and Practice of Constraint Programming (CP 2021)


Abstract
Systems of ordinary differential equations (ODEs) and partial differential equations (PDEs) are extensively used in many fields of science, including physics, biochemistry, nonlinear control, and dynamical systems. On the one hand, analytical methods for solving systems of ODEs/PDEs mostly remain an art and are largely insufficient for complex systems. On the other hand, numerical approximation methods do not yield a viable analytical form of the solution that is often required for downstream tasks. In this paper, we present an approximate approach for solving systems of ODEs/PDEs analytically using solvers like Gurobi developed in Operations Research (OR). Our main idea is to represent entire functions as Bézier curves/surfaces with to-be-determined control points. The ODEs/PDEs as well as their boundary conditions can then be reformulated as constraints on these control points. In many cases, this reformulation yields quadratic programming problems (QPPs) that can be solved in polynomial time. It also allows us to reason about inequalities. We demonstrate the success of our approach on several interesting classes of ODEs/PDEs.
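
Two textbook Bézier identities (not results of the paper) make the control-point reformulation concrete. A degree-n curve with control points P_0, ..., P_n and its derivative are

\[
B(t) = \sum_{i=0}^{n} \binom{n}{i} t^{i}(1-t)^{n-i} P_i,
\qquad
B'(t) = n \sum_{i=0}^{n-1} \binom{n-1}{i} t^{i}(1-t)^{n-1-i}\,(P_{i+1} - P_i).
\]

Hence, as one possible way to impose the equation (the paper may instead match Bernstein coefficients), enforcing the linear ODE y'(t) = a y(t) with y(0) = y_0 at collocation points t_1, ..., t_m yields P_0 = y_0 together with the linear constraints

\[
n \sum_{i=0}^{n-1} \binom{n-1}{i} t_j^{\,i}(1-t_j)^{n-1-i}\,(P_{i+1}-P_i)
\;=\;
a \sum_{i=0}^{n} \binom{n}{i} t_j^{\,i}(1-t_j)^{n-i}\,P_i,
\qquad j = 1,\ldots,m,
\]

in the unknown control points; nonlinear ODEs produce products of control points and hence the quadratic constraints behind the QPPs mentioned above.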

Cite as

Shannon Sweitzer and T. K. Satish Kumar. Differential Programming via OR Methods. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 210, pp. 53:1-53:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{sweitzer_et_al:LIPIcs.CP.2021.53,
  author =	{Sweitzer, Shannon and Kumar, T. K. Satish},
  title =	{{Differential Programming via OR Methods}},
  booktitle =	{27th International Conference on Principles and Practice of Constraint Programming (CP 2021)},
  pages =	{53:1--53:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-211-2},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{210},
  editor =	{Michel, Laurent D.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2021.53},
  URN =		{urn:nbn:de:0030-drops-153445},
  doi =		{10.4230/LIPIcs.CP.2021.53},
  annote =	{Keywords: Differential Programming, Operations Research, B\'{e}zier Curves}
}

Kumar, Rohit

Document
DAISIM: A Computational Simulator for the MakerDAO Stablecoin

Authors: Shreyas Bhat, Ayten Betul Kahya, Bhaskar Krishnamachari, and Rohit Kumar

Published in: OASIcs, Volume 92, 4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021)


Abstract
We present a computational simulation of the single-collateral DAI stablecoin launched by the MakerDAO project in 2017. At the core of the simulation is a model of cryptocurrency investors acting as rational Markowitz mean-variance portfolio optimizers, with heterogeneous risk tolerance. The simulator, called DAISIM, incorporates automated order matching and price update mechanisms to determine the DAI price. We use the simulator to evaluate how the single-collateral DAI price, as well as portfolio allocations, vary for a given population of investors as a function of exogenous parameters such as the price of ETH and various system parameters including stability rate and transaction fee. DAISIM is being made available as open-source and may be useful in evaluating other similar projects.
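
The investor model referenced above is the classical Markowitz mean-variance program. In its standard long-only form (generic notation; DAISIM's exact parameterization may differ), an investor with risk-aversion parameter \lambda chooses portfolio weights w over the available assets by solving

\[
\max_{w}\;\; \mu^{\top} w \;-\; \lambda\, w^{\top}\Sigma\, w
\qquad \text{s.t.} \qquad \mathbf{1}^{\top} w = 1, \quad w \ge 0,
\]

where \mu is the vector of expected returns and \Sigma the covariance matrix; heterogeneous risk tolerance corresponds to different investors carrying different values of \lambda.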

Cite as

Shreyas Bhat, Ayten Betul Kahya, Bhaskar Krishnamachari, and Rohit Kumar. DAISIM: A Computational Simulator for the MakerDAO Stablecoin. In 4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021). Open Access Series in Informatics (OASIcs), Volume 92, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhat_et_al:OASIcs.FAB.2021.3,
  author =	{Bhat, Shreyas and Kahya, Ayten Betul and Krishnamachari, Bhaskar and Kumar, Rohit},
  title =	{{DAISIM: A Computational Simulator for the MakerDAO Stablecoin}},
  booktitle =	{4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-196-2},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{92},
  editor =	{Gramoli, Vincent and Sadoghi, Mohammad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.FAB.2021.3},
  URN =		{urn:nbn:de:0030-drops-139899},
  doi =		{10.4230/OASIcs.FAB.2021.3},
  annote =	{Keywords: Stablecoin, Simulator, MakerDAO}
}

Kumar, Amit

Document
Complete Volume
LIPIcs, Volume 317, APPROX/RANDOM 2024, Complete Volume

Authors: Amit Kumar and Noga Ron-Zewi

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
LIPIcs, Volume 317, APPROX/RANDOM 2024, Complete Volume

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 1-1496, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Proceedings{kumar_et_al:LIPIcs.APPROX/RANDOM.2024,
  title =	{{LIPIcs, Volume 317, APPROX/RANDOM 2024, Complete Volume}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{1--1496},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024},
  URN =		{urn:nbn:de:0030-drops-209921},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024},
  annote =	{Keywords: LIPIcs, Volume 317, APPROX/RANDOM 2024, Complete Volume}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Amit Kumar and Noga Ron-Zewi

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 0:i-0:xxvi, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.APPROX/RANDOM.2024.0,
  author =	{Kumar, Amit and Ron-Zewi, Noga},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{0:i--0:xxvi},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.0},
  URN =		{urn:nbn:de:0030-drops-209933},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Track A: Algorithms, Complexity and Games
Random Separating Hyperplane Theorem and Learning Polytopes

Authors: Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. The theorem asserts that for a point a not in a closed convex set K, there is a hyperplane with K on one side and a strictly on the other side. Our first result, Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a and a polytope K with k vertices and unit diameter in ℜ^d is at least δ, where δ is a fixed constant in (0,1), then a randomly chosen hyperplane separates a and K with probability at least 1/poly(k) and margin at least Ω (δ/√d). RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit diameter polytope K within Hausdorff distance δ, given an optimization oracle for K. Using RSH, we show that with polynomially many random queries to the optimization oracle, K can be approximated within error O(δ). To our knowledge, this is the first provable algorithm for the Hausdorff Problem in this setting. Building on this result, we show that if the vertices of K are well-separated, then an optimization oracle can be used to generate a list of points, each within distance O(δ) of K, with the property that the list contains a point close to each vertex of K. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling, LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption. This assumption states that each vertex of K is far from the convex hull of the remaining vertices of K, and is much weaker than other assumptions behind algorithms in the literature which find vertices of the latent polytope.
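
In symbols, the RSH claim above reads as follows (a direct transcription of the abstract, constants suppressed): if K \subseteq \Re^d is a polytope with k vertices and unit diameter, and \operatorname{dist}(a, K) \ge \delta for a fixed \delta \in (0,1), then

\[
\Pr_{H}\big[\, H \text{ separates } a \text{ from } K \text{ with margin } \Omega(\delta/\sqrt{d}) \,\big] \;\ge\; \frac{1}{\mathrm{poly}(k)}
\]

over the random choice of the hyperplane H.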

Cite as

Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar. Random Separating Hyperplane Theorem and Learning Polytopes. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bhattacharyya_et_al:LIPIcs.ICALP.2024.25,
  author =	{Bhattacharyya, Chiranjib and Kannan, Ravindran and Kumar, Amit},
  title =	{{Random Separating Hyperplane Theorem and Learning Polytopes}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.25},
  URN =		{urn:nbn:de:0030-drops-201687},
  doi =		{10.4230/LIPIcs.ICALP.2024.25},
  annote =	{Keywords: Separating Hyperplane Theorem, Learning Polytopes, Optimization Oracles}
}
Document
FPT Approximation for Capacitated Sum of Radii

Authors: Ragesh Jaiswal, Amit Kumar, and Jatin Yadav

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We consider the capacitated clustering problem in general metric spaces where the goal is to identify k clusters and minimize the sum of the radii of the clusters (we call this the Capacitated k-sumRadii problem). We are interested in fixed-parameter tractable (FPT) approximation algorithms where the running time is of the form f(k) ⋅ poly(n), where f(k) can be an exponential function of k and n is the number of points in the input. In the uniform capacity case, Bandyapadhyay et al. recently gave a 4-approximation algorithm for this problem. Our first result improves this to an FPT 3-approximation and extends to a constant factor approximation for any L_p norm of the cluster radii. In the general capacities version, Bandyapadhyay et al. gave an FPT 15-approximation algorithm. We extend their framework to give an FPT (4 + √13)-approximation algorithm for this problem. Our framework relies on a novel idea of identifying approximations to optimal clusters by carefully pruning points from an initial candidate set of points. This is in contrast to prior results that rely on guessing suitable points and building balls of appropriate radii around them. On the hardness front, we show that assuming the Exponential Time Hypothesis, there is a constant c > 1 such that any c-approximation algorithm for the non-uniform capacity version of this problem requires running time 2^Ω(k/polylog(k)).

Cite as

Ragesh Jaiswal, Amit Kumar, and Jatin Yadav. FPT Approximation for Capacitated Sum of Radii. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 65:1-65:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{jaiswal_et_al:LIPIcs.ITCS.2024.65,
  author =	{Jaiswal, Ragesh and Kumar, Amit and Yadav, Jatin},
  title =	{{FPT Approximation for Capacitated Sum of Radii}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{65:1--65:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.65},
  URN =		{urn:nbn:de:0030-drops-195937},
  doi =		{10.4230/LIPIcs.ITCS.2024.65},
  annote =	{Keywords: Approximation algorithm, parameterized algorithm, clustering}
}
Document
Clustering What Matters in Constrained Settings: Improved Outlier to Outlier-Free Reductions

Authors: Ragesh Jaiswal and Amit Kumar

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
Constrained clustering problems generalize classical clustering formulations, e.g., k-median, k-means, by imposing additional constraints on the feasibility of a clustering. There has been significant recent progress in obtaining approximation algorithms for these problems, both in the metric and the Euclidean settings. However, the outlier version of these problems, where the solution is allowed to leave out m points from the clustering, is not well understood. In this work, we give a general framework for reducing the outlier version of a constrained k-median or k-means problem to the corresponding outlier-free version with only (1+ε)-loss in the approximation ratio. The reduction is obtained by mapping the original instance of the problem to f(k, m, ε) instances of the outlier-free version, where f(k, m, ε) = ((k+m)/ε)^O(m). As specific applications, we get the following results: - First FPT (in the parameters k and m) (1+ε)-approximation algorithm for the outlier version of capacitated k-median and k-means in Euclidean spaces with hard capacities. - First FPT (in the parameters k and m) (3+ε) and (9+ε) approximation algorithms for the outlier version of capacitated k-median and k-means, respectively, in general metric spaces with hard capacities. - First FPT (in the parameters k and m) (2-δ)-approximation algorithm for the outlier version of the k-median problem under the Ulam metric. Our work generalizes the results of Bhattacharya et al. and Agrawal et al. to a larger class of constrained clustering problems. Further, our reduction works for arbitrary metric spaces and so can extend clustering algorithms for outlier-free versions in both Euclidean and arbitrary metric spaces.

Cite as

Ragesh Jaiswal and Amit Kumar. Clustering What Matters in Constrained Settings: Improved Outlier to Outlier-Free Reductions. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 41:1-41:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{jaiswal_et_al:LIPIcs.ISAAC.2023.41,
  author =	{Jaiswal, Ragesh and Kumar, Amit},
  title =	{{Clustering What Matters in Constrained Settings: Improved Outlier to Outlier-Free Reductions}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{41:1--41:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.41},
  URN =		{urn:nbn:de:0030-drops-193433},
  doi =		{10.4230/LIPIcs.ISAAC.2023.41},
  annote =	{Keywords: clustering, constrained, outlier}
}
Document
APPROX
Efficient Algorithms and Hardness Results for the Weighted k-Server Problem

Authors: Anupam Gupta, Amit Kumar, and Debmalya Panigrahi

Published in: LIPIcs, Volume 275, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)


Abstract
In this paper, we study the weighted k-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) k-server problem which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted k-server problem, even on the uniform metric. Specifically, we show that assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use c-resource augmentation for c < 2. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires us to use at least 𝓁 resource augmentation, where 𝓁 is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of (2+ε)𝓁 for any constant ε > 0. In the online setting, an exp(k) lower bound is known for the competitive ratio of any randomized algorithm for the weighted k-server problem on the uniform metric. In contrast, we show that 2𝓁-resource augmentation can bring the competitive ratio down by an exponential factor to only O(𝓁² log 𝓁). Our online algorithm uses the two-stage approach of first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.

Cite as

Anupam Gupta, Amit Kumar, and Debmalya Panigrahi. Efficient Algorithms and Hardness Results for the Weighted k-Server Problem. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 275, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.APPROX/RANDOM.2023.12,
  author =	{Gupta, Anupam and Kumar, Amit and Panigrahi, Debmalya},
  title =	{{Efficient Algorithms and Hardness Results for the Weighted k-Server Problem}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2023)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-296-9},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{275},
  editor =	{Megow, Nicole and Smith, Adam},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2023.12},
  URN =		{urn:nbn:de:0030-drops-188375},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2023.12},
  annote =	{Keywords: Online Algorithms, Weighted k-server, Integrality Gap, Hardness}
}
Document
APPROX
Bag-Of-Tasks Scheduling on Related Machines

Authors: Anupam Gupta, Amit Kumar, and Sahil Singla

Published in: LIPIcs, Volume 207, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)


Abstract
We consider online scheduling to minimize weighted completion time on related machines, where each job consists of several tasks that can be concurrently executed. A job gets completed when all its component tasks finish. We obtain an O(K³ log² K)-competitive algorithm in the non-clairvoyant setting, where K denotes the number of distinct machine speeds. The analysis is based on dual-fitting on a precedence-constrained LP relaxation that may be of independent interest.

Cite as

Anupam Gupta, Amit Kumar, and Sahil Singla. Bag-Of-Tasks Scheduling on Related Machines. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 207, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.APPROX/RANDOM.2021.3,
  author =	{Gupta, Anupam and Kumar, Amit and Singla, Sahil},
  title =	{{Bag-Of-Tasks Scheduling on Related Machines}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021)},
  pages =	{3:1--3:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-207-5},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{207},
  editor =	{Wootters, Mary and Sanit\`{a}, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2021.3},
  URN =		{urn:nbn:de:0030-drops-146967},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2021.3},
  annote =	{Keywords: approximation algorithms, scheduling, bag-of-tasks, related machines}
}
Document
FPT Approximation for Constrained Metric k-Median/Means

Authors: Dishant Goyal, Ragesh Jaiswal, and Amit Kumar

Published in: LIPIcs, Volume 180, 15th International Symposium on Parameterized and Exact Computation (IPEC 2020)


Abstract
The Metric k-median problem over a metric space (𝒳, d) is defined as follows: given a set L ⊆ 𝒳 of facility locations and a set C ⊆ 𝒳 of clients, open a set F ⊆ L of k facilities such that the total service cost, defined as Φ(F, C) := ∑_{x ∈ C} min_{f ∈ F} d(x, f), is minimised. The metric k-means problem is defined similarly using squared distances (i.e., d²(., .) instead of d(., .)). In many applications there are additional constraints that any solution needs to satisfy. For example, to balance the load among the facilities in resource allocation problems, a capacity u is imposed on every facility, so that no more than u clients can be assigned to any facility. This problem is known as the capacitated k-means/k-median problem. Likewise, various other applications have different constraints, which give rise to different constrained versions of the problem such as the r-gather, fault-tolerant, and outlier k-means/k-median problems. Surprisingly, for many of these constrained problems, no constant-approximation algorithm is known. Moreover, the unconstrained problem itself is known [Marek Adamczyk et al., 2019] to be W[2]-hard when parameterized by k. We give FPT algorithms with constant approximation guarantees for a range of constrained k-median/means problems. For some of the constrained problems, ours is the first constant-factor approximation algorithm, whereas for others, we improve or match the approximation guarantee of previous works. We work within the unified framework of Ding and Xu [Ding and Xu, 2015], which allows us to simultaneously obtain algorithms for a range of constrained problems. In particular, we obtain a (3+ε)-approximation and a (9+ε)-approximation for the constrained versions of the k-median and k-means problems, respectively, in FPT time. In many practical settings of the k-median/means problem, one is allowed to open a facility at any client location, i.e., C ⊆ L. For this special case, our algorithm gives a (2+ε)-approximation and a (4+ε)-approximation for the constrained versions of the k-median and k-means problems, respectively, in FPT time. Since our algorithm is based on a simple sampling technique, it can also be converted to a constant-pass log-space streaming algorithm. In particular, here are some of the main highlights of this work: 1) For the uniform capacitated k-median/means problems, our results match the previously known results of Cohen-Addad and Li [Vincent Cohen-Addad and Jason Li, 2019]. 2) For the r-gather k-median/means problem (clustering with a lower bound on the size of clusters), our FPT approximation bounds are better than what was previously known. 3) Our approximation bounds for the fault-tolerant, outlier, and uncertain versions are better than all previously known results, albeit in FPT time. 4) For certain constrained settings such as chromatic, l-diversity, and semi-supervised k-median/means, we obtain the first constant-factor approximation algorithms, to the best of our knowledge. 5) Since our algorithms are based on a simple sampling-based approach, we also obtain constant-pass log-space streaming algorithms for most of the above-mentioned problems.

Cite as

Dishant Goyal, Ragesh Jaiswal, and Amit Kumar. FPT Approximation for Constrained Metric k-Median/Means. In 15th International Symposium on Parameterized and Exact Computation (IPEC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 180, pp. 14:1-14:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{goyal_et_al:LIPIcs.IPEC.2020.14,
  author =	{Goyal, Dishant and Jaiswal, Ragesh and Kumar, Amit},
  title =	{{FPT Approximation for Constrained Metric k-Median/Means}},
  booktitle =	{15th International Symposium on Parameterized and Exact Computation (IPEC 2020)},
  pages =	{14:1--14:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-172-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{180},
  editor =	{Cao, Yixin and Pilipczuk, Marcin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2020.14},
  URN =		{urn:nbn:de:0030-drops-133170},
  doi =		{10.4230/LIPIcs.IPEC.2020.14},
  annote =	{Keywords: k-means, k-median, approximation algorithms, parameterised algorithms}
}
Document
On Sampling Based Algorithms for k-Means

Authors: Anup Bhattacharya, Dishant Goyal, Ragesh Jaiswal, and Amit Kumar

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
We generalise the results of Bhattacharya et al. [Bhattacharya et al., 2018] for the list-k-means problem, defined as follows: for an (unknown) partition X₁, ..., X_k of the dataset X ⊆ ℝ^d, find a list of k-center-sets (each element in the list is a set of k centers) such that at least one of the k-center-sets {c₁, ..., c_k} in the list gives a (1+ε)-approximation with respect to the cost function min_{permutation π} [∑_{i = 1}^{k} ∑_{x ∈ X_i} ||x - c_{π(i)}||²]. The list-k-means problem is important for the constrained k-means problem, since algorithms for the former can be converted to PTASs for various versions of the latter. The algorithm for the list-k-means problem by Bhattacharya et al. is a D²-sampling-based algorithm that runs in k iterations. Making use of a constant-factor solution for the (classical or unconstrained) k-means problem, we generalise the algorithm of Bhattacharya et al. in two ways: (i) for any fixed set X_{j₁}, ..., X_{j_t} of t ≤ k clusters, the algorithm produces a list of (k/ε)^{O(t/ε)} t-center sets such that (w.h.p.) at least one of them is good for X_{j₁}, ..., X_{j_t}, and (ii) the algorithm runs in a single iteration. The consequences of our generalisations are as follows: 1) Faster PTAS under stability and a parameterised reduction: Property (i) of our generalisation is useful in scenarios where finding good centers becomes easier once good centers for a few "bad" clusters have been chosen. One such case is clustering under the stability condition of Awasthi et al. [Awasthi et al., 2010], where the number of such bad clusters is a constant. Using property (i), we significantly improve the running time of their algorithm from O(dn³)(k log n)^{poly(1/β, 1/ε)} to O(dn³ (k/ε)^{O(1/βε²)}). Another application is a parameterised reduction from the outlier version of k-means to the classical one, where the bad clusters are the outliers. 2) Streaming algorithms: The sampling algorithm running in a single iteration (i.e., property (ii)) allows us to design a constant-pass, logspace streaming algorithm for the list-k-means problem. This can be converted to a constant-pass, logspace streaming PTAS for various constrained versions of the k-means problem. In particular, this gives a 3-pass, polylog-space streaming PTAS for the constrained binary k-means problem, which in turn gives a 4-pass, polylog-space streaming PTAS for the generalised binary 𝓁₀-rank-r approximation problem. This is the first constant-pass, polylog-space streaming algorithm for either of the two problems. Coreset-based techniques, another general approach for designing streaming algorithms, are not known to work for the constrained binary k-means problem, to the best of our knowledge.
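
The D²-sampling primitive that both the original and the generalised algorithms build on is the k-means++ seeding rule: each new center is drawn with probability proportional to the squared distance from a point to the nearest center chosen so far. A minimal, self-contained Python sketch of just this primitive follows; the list-building and pruning machinery of the paper is not shown.

# Minimal D^2-sampling (k-means++-style seeding): each new center is drawn
# with probability proportional to the squared distance to the nearest
# center chosen so far.
import random

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def d2_sample_centers(points, k, rng=random.Random(0)):
    centers = [list(rng.choice(points))]              # first center: uniform
    d2 = [sq_dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        total = sum(d2)
        if total == 0.0:                              # fewer than k distinct points
            centers.append(list(rng.choice(points)))
            continue
        r = rng.uniform(0.0, total)                   # index sampled prop. to d2
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(list(p))
                break
        else:
            centers.append(list(points[-1]))          # floating-point guard
        d2 = [min(old, sq_dist(p, centers[-1])) for old, p in zip(d2, points)]
    return centers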

Cite as

Anup Bhattacharya, Dishant Goyal, Ragesh Jaiswal, and Amit Kumar. On Sampling Based Algorithms for k-Means. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{bhattacharya_et_al:LIPIcs.FSTTCS.2020.13,
  author =	{Bhattacharya, Anup and Goyal, Dishant and Jaiswal, Ragesh and Kumar, Amit},
  title =	{{On Sampling Based Algorithms for k-Means}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.13},
  URN =		{urn:nbn:de:0030-drops-132549},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.13},
  annote =	{Keywords: k-means, low rank approximation}
}
Document
Online Carpooling Using Expander Decompositions

Authors: Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Sahil Singla

Published in: LIPIcs, Volume 182, 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)


Abstract
We consider the online carpooling problem: given n vertices, a sequence of edges arrive over time. When an edge e_t = (u_t, v_t) arrives at time step t, the algorithm must orient the edge either as v_t → u_t or u_t → v_t, with the objective of minimizing the maximum discrepancy of any vertex, i.e., the absolute difference between its in-degree and out-degree. Edges correspond to pairs of persons wanting to ride together, and orienting denotes designating the driver. The discrepancy objective then corresponds to every person driving close to their fair share of rides they participate in. In this paper, we design efficient algorithms which can maintain polylog(n,T) maximum discrepancy (w.h.p) over any sequence of T arrivals, when the arriving edges are sampled independently and uniformly from any given graph G. This provides the first polylogarithmic bounds for the online (stochastic) carpooling problem. Prior to this work, the best known bounds were O(√{n log n})-discrepancy for any adversarial sequence of arrivals, or O(log log n)-discrepancy bounds for the stochastic arrivals when G is the complete graph. The technical crux of our paper is in showing that the simple greedy algorithm, which has provably good discrepancy bounds when the arriving edges are drawn uniformly at random from the complete graph, also has polylog discrepancy when G is an expander graph. We then combine this with known expander-decomposition results to design our overall algorithm.
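
One natural reading of the "simple greedy algorithm" analysed above: when an edge arrives, the endpoint that is currently behind on driving takes the out-edge, keeping in- and out-degrees as balanced as possible. A minimal Python sketch of that rule follows (the expander-decomposition layer on top of it is not shown, and the random tie-break is an assumption).

# Greedy edge orientation for online carpooling: when (u, v) arrives, the
# endpoint that is currently behind on driving becomes the driver (gets the
# out-edge), keeping out-degree - in-degree balanced at every vertex.
import random
from collections import defaultdict

def greedy_orient(edge_stream, rng=random.Random(0)):
    imbalance = defaultdict(int)          # out-degree minus in-degree
    orientation = []
    for u, v in edge_stream:
        if imbalance[u] < imbalance[v]:
            driver, rider = u, v
        elif imbalance[v] < imbalance[u]:
            driver, rider = v, u
        else:                             # tie: break randomly (assumption)
            driver, rider = (u, v) if rng.random() < 0.5 else (v, u)
        imbalance[driver] += 1
        imbalance[rider] -= 1
        orientation.append((driver, rider))
    max_discrepancy = max((abs(x) for x in imbalance.values()), default=0)
    return orientation, max_discrepancy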

Cite as

Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, and Sahil Singla. Online Carpooling Using Expander Decompositions. In 40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 182, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.FSTTCS.2020.23,
  author =	{Gupta, Anupam and Krishnaswamy, Ravishankar and Kumar, Amit and Singla, Sahil},
  title =	{{Online Carpooling Using Expander Decompositions}},
  booktitle =	{40th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2020)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-174-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{182},
  editor =	{Saxena, Nitin and Simon, Sunil},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2020.23},
  URN =		{urn:nbn:de:0030-drops-132647},
  doi =		{10.4230/LIPIcs.FSTTCS.2020.23},
  annote =	{Keywords: Online Algorithms, Discrepancy Minimization, Carpooling}
}
Document
Track A: Algorithms, Complexity and Games
Tight FPT Approximations for k-Median and k-Means

Authors: Vincent Cohen-Addad, Anupam Gupta, Amit Kumar, Euiwoong Lee, and Jason Li

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We investigate the fine-grained complexity of approximating the classical k-Median/k-Means clustering problems in general metric spaces. We show how to improve the approximation factors to (1+2/e+epsilon) and (1+8/e+epsilon) respectively, using algorithms that run in fixed-parameter time. Moreover, we show that we cannot do better in FPT time, modulo recent complexity-theoretic conjectures.

Cite as

Vincent Cohen-Addad, Anupam Gupta, Amit Kumar, Euiwoong Lee, and Jason Li. Tight FPT Approximations for k-Median and k-Means. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 42:1-42:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{cohenaddad_et_al:LIPIcs.ICALP.2019.42,
  author =	{Cohen-Addad, Vincent and Gupta, Anupam and Kumar, Amit and Lee, Euiwoong and Li, Jason},
  title =	{{Tight FPT Approximations for k-Median and k-Means}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{42:1--42:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.42},
  URN =		{urn:nbn:de:0030-drops-106182},
  doi =		{10.4230/LIPIcs.ICALP.2019.42},
  annote =	{Keywords: approximation algorithms, fixed-parameter tractability, k-median, k-means, clustering, core-sets}
}
Document
Track A: Algorithms, Complexity and Games
Non-Clairvoyant Precedence Constrained Scheduling

Authors: Naveen Garg, Anupam Gupta, Amit Kumar, and Sahil Singla

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
We consider the online problem of scheduling jobs on identical machines, where jobs have precedence constraints. We are interested in the demanding setting where the job sizes are not known up-front, but are revealed only upon completion (the non-clairvoyant setting). Such precedence-constrained scheduling problems routinely arise in map-reduce and large-scale optimization. For minimizing the total weighted completion time, we give a constant-competitive algorithm. For total weighted flow-time, we give an O(1/epsilon^2)-competitive algorithm under (1+epsilon)-speed augmentation and a natural "no-surprises" assumption on the release dates of jobs (which we show is necessary in this context). Our algorithm proceeds by assigning virtual rates to all waiting jobs, including the ones that are dependent on other uncompleted jobs. We then use these virtual rates to decide on the actual rates of minimal jobs (i.e., jobs which do not have dependencies and hence are eligible to run). Interestingly, the virtual rates are obtained by allocating time in a fair manner, using an Eisenberg-Gale-type convex program (which we can solve optimally using a primal-dual scheme). The optimality conditions of this convex program allow us to show dual-fitting proofs more easily, without having to guess and hand-craft the duals. This idea of using fair virtual rates may have broader applicability in scheduling problems.
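
The "Eisenberg-Gale-type convex program" mentioned above follows the canonical proportional-fairness template. In generic form (the paper adapts it to precedence constraints on identical machines), the virtual rates R_j of the waiting jobs with weights w_j solve

\[
\max\;\; \sum_{j} w_j \log R_j
\qquad \text{s.t.} \qquad R \in \mathcal{P},
\]

where \mathcal{P} is the set of rate vectors jointly achievable at the current time; as the abstract notes, the optimality conditions of this program supply the quantities needed for the dual-fitting analysis without hand-crafted duals.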

Cite as

Naveen Garg, Anupam Gupta, Amit Kumar, and Sahil Singla. Non-Clairvoyant Precedence Constrained Scheduling. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 63:1-63:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{garg_et_al:LIPIcs.ICALP.2019.63,
  author =	{Garg, Naveen and Gupta, Anupam and Kumar, Amit and Singla, Sahil},
  title =	{{Non-Clairvoyant Precedence Constrained Scheduling}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{63:1--63:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.63},
  URN =		{urn:nbn:de:0030-drops-106394},
  doi =		{10.4230/LIPIcs.ICALP.2019.63},
  annote =	{Keywords: Online algorithms, Scheduling, Primal-Dual analysis, Nash welfare}
}
Document
Fully-Dynamic Bin Packing with Little Repacking

Authors: Björn Feldkord, Matthias Feldotto, Anupam Gupta, Guru Guruganesh, Amit Kumar, Sören Riechers, and David Wajc

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We study the classic bin packing problem in a fully-dynamic setting, where new items can arrive and old items may depart. We want algorithms with low asymptotic competitive ratio while repacking items sparingly between updates. Formally, each item i has a movement cost c_i >= 0, and we want to use alpha * OPT bins and incur a movement cost gamma * c_i, either in the worst case, or in an amortized sense, for alpha, gamma as small as possible. We call gamma the recourse of the algorithm. This is motivated by cloud storage applications, where fully-dynamic bin packing models the problem of data backup to minimize the number of disks used, as well as communication incurred in moving file backups between disks. Since the set of files changes over time, we could recompute a solution periodically from scratch, but this would give a high number of disk rewrites, incurring a high energy cost and possible wear and tear of the disks. In this work, we present optimal tradeoffs between number of bins used and number of items repacked, as well as natural extensions of the latter measure.

Cite as

Björn Feldkord, Matthias Feldotto, Anupam Gupta, Guru Guruganesh, Amit Kumar, Sören Riechers, and David Wajc. Fully-Dynamic Bin Packing with Little Repacking. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 51:1-51:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{feldkord_et_al:LIPIcs.ICALP.2018.51,
  author =	{Feldkord, Bj\"{o}rn and Feldotto, Matthias and Gupta, Anupam and Guruganesh, Guru and Kumar, Amit and Riechers, S\"{o}ren and Wajc, David},
  title =	{{Fully-Dynamic Bin Packing with Little Repacking}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{51:1--51:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.51},
  URN =		{urn:nbn:de:0030-drops-90556},
  doi =		{10.4230/LIPIcs.ICALP.2018.51},
  annote =	{Keywords: Bin Packing, Fully Dynamic, Recourse, Tradeoffs}
}
Document
Non-Preemptive Flow-Time Minimization via Rejections

Authors: Anupam Gupta, Amit Kumar, and Jason Li

Published in: LIPIcs, Volume 107, 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)


Abstract
We consider the online problem of minimizing weighted flow-time on unrelated machines. Although much is known about this problem in the resource-augmentation setting, these results assume that jobs can be preempted. We give the first constant-competitive algorithm for the non-preemptive setting in the rejection model. In this rejection model, we are allowed to reject an epsilon-fraction of the total weight of jobs, and we compare the resulting flow-time to that of the offline optimum, which is required to schedule all jobs. This is arguably the weakest assumption under which such a result is known for weighted flow-time on unrelated machines. While our algorithms are simple, we need a delicate argument to bound the flow-time. Indeed, we use the dual-fitting framework, with considerably more machinery, to certify that the cost of our algorithm is within a constant of the optimum while only a small fraction of the jobs are rejected.

Cite as

Anupam Gupta, Amit Kumar, and Jason Li. Non-Preemptive Flow-Time Minimization via Rejections. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 107, pp. 70:1-70:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{gupta_et_al:LIPIcs.ICALP.2018.70,
  author =	{Gupta, Anupam and Kumar, Amit and Li, Jason},
  title =	{{Non-Preemptive Flow-Time Minimization via Rejections}},
  booktitle =	{45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
  pages =	{70:1--70:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-076-7},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{107},
  editor =	{Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.70},
  URN =		{urn:nbn:de:0030-drops-90740},
  doi =		{10.4230/LIPIcs.ICALP.2018.70},
  annote =	{Keywords: Scheduling, Rejection, Unrelated Machines, Non-Preemptive}
}
Document
Approximating Airports and Railways

Authors: Anna Adamaszek, Antonios Antoniadis, Amit Kumar, and Tobias Mömke

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
In this paper we consider the airport and railway problem (AR), which combines capacitated facility location with network design, both in the general metric and the two-dimensional Euclidean space. An instance of the airport and railway problem consists of a set of points in the corresponding metric, together with a non-negative weight for each point, and a parameter k. The points represent cities, the weights denote costs of opening an airport in the corresponding city, and the parameter k is a maximum capacity of an airport. The goal is to construct a minimum cost network of airports and railways connecting all the cities, where railways correspond to edges connecting pairs of points, and the cost of a railway is equal to the distance between the corresponding points. The network is partitioned into components, where each component contains an open airport, and spans at most k cities. For the Euclidean case, any points in the plane can be used as Steiner vertices of the network. We obtain the first bicriteria approximation algorithm for AR for the general metric case, which yields a 4-approximate solution with a resource augmentation of the airport capacity k by a factor of 2. More generally, for any parameter 0 < p <= 1 where pk is an integer we develop a (4/3)(2 + 1/p)-approximation algorithm for metric AR with a resource augmentation by a factor of 1 + p. Furthermore, we obtain the first constant factor approximation algorithm that does not resort to resource augmentation for AR in the Euclidean plane. Additionally, for the Euclidean setting we provide a quasi-polynomial time approximation scheme for the same problem with a resource augmentation by a factor of 1 + mu on the airport capacity, for any fixed mu > 0.
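
In symbols, the AR objective described above is (a direct transcription of the problem statement, with A the set of cities where an airport is opened and F the set of railway edges):

\[
\min_{A,\,F}\;\; \sum_{v \in A} w_v \;+\; \sum_{\{u,v\} \in F} d(u,v)
\quad \text{s.t. every connected component of } (V, F) \text{ contains an open airport and spans at most } k \text{ cities,}
\]

with the Euclidean variant additionally allowing arbitrary points of the plane as Steiner vertices of the railway network.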

Cite as

Anna Adamaszek, Antonios Antoniadis, Amit Kumar, and Tobias Mömke. Approximating Airports and Railways. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 5:1-5:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{adamaszek_et_al:LIPIcs.STACS.2018.5,
  author =	{Adamaszek, Anna and Antoniadis, Antonios and Kumar, Amit and M\"{o}mke, Tobias},
  title =	{{Approximating Airports and Railways}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{5:1--5:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.5},
  URN =		{urn:nbn:de:0030-drops-85183},
  doi =		{10.4230/LIPIcs.STACS.2018.5},
  annote =	{Keywords: Network Design, Facility Location, Approximation Algorithms, PTAS, Metric, Euclidean}
}
Document
A Local-Search Algorithm for Steiner Forest

Authors: Martin Groß, Anupam Gupta, Amit Kumar, Jannik Matuschke, Daniel R. Schmidt, Melanie Schmidt, and José Verschae

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
In the Steiner Forest problem, we are given a graph and a collection of source-sink pairs, and the goal is to find a subgraph of minimum total length such that all pairs are connected. The problem is APX-Hard and can be 2-approximated by, e.g., the elegant primal-dual algorithm of Agrawal, Klein, and Ravi from 1995. We give a local-search-based constant-factor approximation for the problem. Local search brings in new techniques to an area that has for long not seen any improvements and might be a step towards a combinatorial algorithm for the more general survivable network design problem. Moreover, local search was an essential tool to tackle the dynamic MST/Steiner Tree problem, whereas dynamic Steiner Forest is still wide open. It is easy to see that any constant factor local search algorithm requires steps that add/drop many edges together. We propose natural local moves which, at each step, either (a) add a shortest path in the current graph and then drop a bunch of inessential edges, or (b) add a set of edges to the current solution. This second type of moves is motivated by the potential function we use to measure progress, combining the cost of the solution with a penalty for each connected component. Our carefully-chosen local moves and potential function work in tandem to eliminate bad local minima that arise when using more traditional local moves. Our analysis first considers the case where the local optimum is a single tree, and shows optimality w.r.t. moves that add a single edge (and drop a set of edges) is enough to bound the locality gap. For the general case, we show how to "project" the optimal solution onto the different trees of the local optimum without incurring too much cost (and this argument uses optimality w.r.t. both kinds of moves), followed by a tree-by-tree argument. We hope both the potential function, and our analysis techniques will be useful to develop and analyze local-search algorithms in other contexts.
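
The potential described above (solution cost plus a penalty per connected component) has the generic shape

\[
\Phi(S) \;=\; c(S) \;+\; \lambda \cdot \#\{\text{connected components of } S\}
\]

for some penalty parameter \lambda; the abstract does not specify the exact weighting, so this is only the shape. It makes move (b) useful: adding a set of edges strictly increases c(S) but merges components, so \Phi can still decrease, which is how the bad local minima of cost-only local search are avoided.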

Cite as

Martin Groß, Anupam Gupta, Amit Kumar, Jannik Matuschke, Daniel R. Schmidt, Melanie Schmidt, and José Verschae. A Local-Search Algorithm for Steiner Forest. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 31:1-31:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{gro_et_al:LIPIcs.ITCS.2018.31,
  author =	{Gro{\ss}, Martin and Gupta, Anupam and Kumar, Amit and Matuschke, Jannik and Schmidt, Daniel R. and Schmidt, Melanie and Verschae, Jos\'{e}},
  title =	{{A Local-Search Algorithm for Steiner Forest}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{31:1--31:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.31},
  URN =		{urn:nbn:de:0030-drops-83134},
  doi =		{10.4230/LIPIcs.ITCS.2018.31},
  annote =	{Keywords: Local Search, Steiner Forest, Approximation Algorithms, Network Design}
}
Document
Approximate Clustering with Same-Cluster Queries

Authors: Nir Ailon, Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar

Published in: LIPIcs, Volume 94, 9th Innovations in Theoretical Computer Science Conference (ITCS 2018)


Abstract
Ashtiani et al. proposed a Semi-Supervised Active Clustering framework (SSAC), where the learner is allowed to make adaptive queries to a domain expert. The queries are of the kind "do two given points belong to the same optimal cluster?", where the answers to these queries are assumed to be consistent with a unique optimal solution. There are many clustering contexts where such same-cluster queries are feasible. Ashtiani et al. exhibited the power of such queries by showing that any instance of the k-means clustering problem, with an additional margin assumption, can be solved efficiently if one is allowed to make O(k^2 log{k} + k log{n}) same-cluster queries. This is interesting since the k-means problem, even with the margin assumption, is NP-hard. In this paper, we extend the work of Ashtiani et al. to the approximation setting by showing that a few such same-cluster queries enable one to obtain a polynomial-time (1+eps)-approximation algorithm for the k-means problem without any margin assumption on the input dataset. Again, this is interesting since the k-means problem is NP-hard to approximate within a factor (1+c) for a fixed constant 0 < c < 1. The number of same-cluster queries used by the algorithm is poly(k/eps), which is independent of the size n of the dataset. Our algorithm is based on the D^2-sampling technique, also known as the k-means++ seeding algorithm. We also give a conditional lower bound on the number of same-cluster queries showing that if the Exponential Time Hypothesis (ETH) holds, then any such efficient query algorithm needs to make Omega (k/poly log k) same-cluster queries. Our algorithm can be extended to the case where the query answers are wrong with some bounded probability. Another result we show for the k-means++ seeding is that a small modification of the k-means++ seeding within the SSAC framework converts it to a constant-factor approximation algorithm instead of the well-known O(log k)-approximation algorithm.
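
A minimal Python sketch of the D^2-sampling (k-means++ seeding) step mentioned above; this is the standard seeding procedure only, not the paper's query-augmented algorithm, and the function name is ours:

import numpy as np

def d2_sample_centers(points, k, seed=None):
    """Standard D^2-sampling / k-means++ seeding: each new center is drawn with
    probability proportional to its squared distance to the nearest chosen center."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    centers = [points[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

# Example: seed 3 centers on 100 random 2-D points.
pts = np.random.default_rng(0).normal(size=(100, 2))
print(d2_sample_centers(pts, k=3).shape)  # (3, 2)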

Cite as

Nir Ailon, Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar. Approximate Clustering with Same-Cluster Queries. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 94, pp. 40:1-40:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{ailon_et_al:LIPIcs.ITCS.2018.40,
  author =	{Ailon, Nir and Bhattacharya, Anup and Jaiswal, Ragesh and Kumar, Amit},
  title =	{{Approximate Clustering with Same-Cluster Queries}},
  booktitle =	{9th Innovations in Theoretical Computer Science Conference (ITCS 2018)},
  pages =	{40:1--40:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-060-6},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{94},
  editor =	{Karlin, Anna R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2018.40},
  URN =		{urn:nbn:de:0030-drops-83358},
  doi =		{10.4230/LIPIcs.ITCS.2018.40},
  annote =	{Keywords: k-means, semi-supervised learning, query bounds}
}
Document
Faster Algorithms for the Constrained k-Means Problem

Authors: Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
The classical center-based clustering problems such as k-means/median/center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. For instance, consider the r-gather clustering problem, where there is an additional constraint that each of the clusters should have at least r points, or the capacitated clustering problem, where there is an upper bound on the cluster sizes. Consider a variant of the k-means problem that may be regarded as a general version of such problems. Here, the optimal clusters O_1, ..., O_k are an arbitrary partition of the dataset and the goal is to output k-centers c_1, ..., c_k such that the objective function sum_{i=1}^{k} sum_{x in O_{i}} ||x - c_{i}||^2 is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of k centers will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such k centers such that at least one of these k centers behaves well. Given an error parameter epsilon > 0, let l denote the size of the smallest list of k-centers such that at least one of the k-centers gives a (1+epsilon) approximation w.r.t. the objective function above. In this paper, we show an upper bound on l by giving a randomized algorithm that outputs a list of 2^{~O(k/epsilon)} k-centers. We also give a closely matching lower bound of 2^{~Omega(k/sqrt{epsilon})}. Moreover, our algorithm runs in time O(n * d * 2^{~O(k/epsilon)}). This is a significant improvement over the previous result of Ding and Xu, who gave an algorithm with running time O(n * d * (log{n})^{k} * 2^{poly(k/epsilon)}) that outputs a list of size O((log{n})^k * 2^{poly(k/epsilon)}). Our techniques generalize to the k-median problem and to many other settings where non-Euclidean distance measures are involved.
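
To make the list-of-centers viewpoint concrete, the sketch below (our own illustration, not the paper's algorithm) evaluates the objective sum_{i} sum_{x in O_i} ||x - c_i||^2 for each candidate set of k centers against a fixed partition and keeps the best one:

import numpy as np

def constrained_kmeans_cost(points, labels, centers):
    """Objective from the abstract: cluster O_i (points with label i) is paired
    with center c_i by index, and squared distances are summed."""
    cost = 0.0
    for i, c in enumerate(centers):
        cluster = points[labels == i]
        cost += np.sum((cluster - c) ** 2)
    return cost

def best_from_list(points, labels, candidate_center_sets):
    """Pick the candidate k-center set with the smallest objective value;
    the list itself would be produced by an algorithm such as the one above."""
    return min(candidate_center_sets,
               key=lambda cs: constrained_kmeans_cost(points, labels, cs))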

Cite as

Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar. Faster Algorithms for the Constrained k-Means Problem. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 16:1-16:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{bhattacharya_et_al:LIPIcs.STACS.2016.16,
  author =	{Bhattacharya, Anup and Jaiswal, Ragesh and Kumar, Amit},
  title =	{{Faster Algorithms for the Constrained k-Means Problem}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{16:1--16:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.16},
  URN =		{urn:nbn:de:0030-drops-57179},
  doi =		{10.4230/LIPIcs.STACS.2016.16},
  annote =	{Keywords: k-means, k-median, approximation algorithm, sampling}
}
Document
Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model

Authors: Anamitra Roy Choudhury, Syamantak Das, and Amit Kumar

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
We consider the online scheduling problem to minimize the weighted ell_p-norm of flow-time of jobs. We study this problem under the rejection model introduced by Choudhury et al. (SODA 2015) - here the online algorithm is allowed to not serve an eps-fraction of the requests. We consider the restricted assignments setting where each job can go to a specified subset of machines. Our main result is an immediate dispatch non-migratory 1/eps^{O(1)}-competitive algorithm for this problem when one is allowed to reject at most eps-fraction of the total weight of jobs arriving. This is in contrast with the speed augmentation model under which no online algorithm for this problem can achieve a competitive ratio independent of p.

Cite as

Anamitra Roy Choudhury, Syamantak Das, and Amit Kumar. Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 25-37, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{roychoudhury_et_al:LIPIcs.FSTTCS.2015.25,
  author =	{Roy Choudhury, Anamitra and Das, Syamantak and Kumar, Amit},
  title =	{{Minimizing Weighted lp-Norm of Flow-Time in the Rejection Model}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{25--37},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.25},
  URN =		{urn:nbn:de:0030-drops-56341},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.25},
  annote =	{Keywords: online scheduling, flow-time, competitive analysis}
}
Document
Complete Volume
LIPIcs, Volume 13, FSTTCS'11, Complete Volume

Authors: Supratik Chakraborty and Amit Kumar

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
LIPIcs, Volume 13, FSTTCS'11, Complete Volume

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


Copy BibTex To Clipboard

@Proceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011,
  title =	{{LIPIcs, Volume 13, FSTTCS'11, Complete Volume}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011},
  URN =		{urn:nbn:de:0030-drops-41077},
  doi =		{10.4230/LIPIcs.FSTTCS.2011},
  annote =	{Keywords: Software/Program Verification, Models of Computation, Modes of Computation, Complexity Measures and Classes, Nonnumerical Algorithms and Problems Specifying and Verifying and Reasoning about Programs}
}
Document
Efficient on-line algorithm for maintaining k-cover of sparse bit-strings

Authors: Amit Kumar, Preeti R. Panda, and Smruti Sarangi

Published in: LIPIcs, Volume 18, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)


Abstract
We consider the on-line problem of representing a sparse bit string by a set of k intervals, where k is much smaller than the length of the string. The goal is to minimize the total length of these intervals under the condition that each 1-bit must be in one of these intervals. We give an efficient greedy algorithm which takes time O(log k) per update (an update involves converting a 0-bit to a 1-bit), which is independent of the size of the entire string. We prove that this greedy algorithm is 2-competitive. We use a natural linear programming relaxation for this problem, and analyze the algorithm by finding a dual feasible solution whose value matches the cost of the greedy algorithm.
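
One natural greedy consistent with the description above (our reconstruction, not necessarily the paper's exact algorithm) inserts each new 1-bit as a unit interval and, whenever more than k intervals exist, merges the adjacent pair separated by the smallest gap, since that merge increases the total length the least. A simple, unoptimized Python sketch:

import bisect

def k_cover_update(intervals, pos, k):
    """intervals: sorted list of disjoint [lo, hi] intervals covering all 1-bits.
    pos: index of the bit that just flipped from 0 to 1.
    (A production version would use a heap or balanced tree to get the O(log k)
    per-update bound from the abstract; this sketch favors clarity.)"""
    for lo, hi in intervals:
        if lo <= pos <= hi:
            return intervals               # already covered
    bisect.insort(intervals, [pos, pos])   # new degenerate interval
    while len(intervals) > k:
        gaps = [intervals[i + 1][0] - intervals[i][1]
                for i in range(len(intervals) - 1)]
        i = gaps.index(min(gaps))          # merge the cheapest adjacent pair
        intervals[i:i + 2] = [[intervals[i][0], intervals[i + 1][1]]]
    return intervals

# Cover 1-bits at positions 2, 3, 10, 50 with k = 2 intervals.
iv = []
for p in (2, 3, 10, 50):
    iv = k_cover_update(iv, p, k=2)
print(iv)  # [[2, 10], [50, 50]]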

Cite as

Amit Kumar, Preeti R. Panda, and Smruti Sarangi. Efficient on-line algorithm for maintaining k-cover of sparse bit-strings. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 249-256, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.FSTTCS.2012.249,
  author =	{Kumar, Amit and Panda, Preeti R. and Sarangi, Smruti},
  title =	{{Efficient on-line algorithm for maintaining k-cover of sparse bit-strings}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{249--256},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.249},
  URN =		{urn:nbn:de:0030-drops-38639},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.249},
  annote =	{Keywords: On-line algorithms, string algorithms}
}
Document
Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees

Authors: Khaled Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal

Published in: LIPIcs, Volume 18, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)


Abstract
We study the Unsplittable Flow Problem (UFP) and related variants, namely UFP with Bag Constraints and UFP with Rounds, on paths and trees. We provide improved constant factor approximation algorithms for all these problems under the no bottleneck assumption (NBA), which says that the maximum demand for any source-sink pair is at most the minimum capacity of any edge. We obtain these improved results by expressing a feasible solution to a natural LP relaxation of the UFP as a near-convex combination of feasible integral solutions.

Cite as

Khaled Elbassioni, Naveen Garg, Divya Gupta, Amit Kumar, Vishal Narula, and Arindam Pal. Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012). Leibniz International Proceedings in Informatics (LIPIcs), Volume 18, pp. 267-275, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2012)


Copy BibTex To Clipboard

@InProceedings{elbassioni_et_al:LIPIcs.FSTTCS.2012.267,
  author =	{Elbassioni, Khaled and Garg, Naveen and Gupta, Divya and Kumar, Amit and Narula, Vishal and Pal, Arindam},
  title =	{{Approximation Algorithms for the Unsplittable Flow Problem on Paths and Trees}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2012)},
  pages =	{267--275},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-47-7},
  ISSN =	{1868-8969},
  year =	{2012},
  volume =	{18},
  editor =	{D'Souza, Deepak and Radhakrishnan, Jaikumar and Telikepalli, Kavitha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2012.267},
  URN =		{urn:nbn:de:0030-drops-38650},
  doi =		{10.4230/LIPIcs.FSTTCS.2012.267},
  annote =	{Keywords: Approximation Algorithms, Integer Decomposition, Linear Programming, Scheduling, Unsplittable Flows}
}
Document
Front Matter
Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers

Authors: Supratik Chakraborty and Amit Kumar

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers

Cite as

IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. i-xv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011.i,
  author =	{Chakraborty, Supratik and Kumar, Amit},
  title =	{{Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{i--xv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.i},
  URN =		{urn:nbn:de:0030-drops-33196},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.i},
  annote =	{Keywords: Frontmatter, Table of Contents, Preface, Conference Organization, External Reviewers}
}
Document
Author Index

Authors: Supratik Chakraborty and Amit Kumar

Published in: LIPIcs, Volume 13, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)


Abstract
Author Index

Cite as

Supratik Chakraborty and Amit Kumar. Author Index. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 13, pp. 16-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{chakraborty_et_al:LIPIcs.FSTTCS.2011.xvi,
  author =	{Chakraborty, Supratik and Kumar, Amit},
  title =	{{Author Index}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2011)},
  pages =	{16--17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-34-7},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{13},
  editor =	{Chakraborty, Supratik and Kumar, Amit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2011.xvi},
  URN =		{urn:nbn:de:0030-drops-33636},
  doi =		{10.4230/LIPIcs.FSTTCS.2011.xvi},
  annote =	{Keywords: Author Index}
}
Document
All-Norms and All-L_p-Norms Approximation Algorithms

Authors: Daniel Golovin, Anupam Gupta, Amit Kumar, and Kanat Tangwongsan

Published in: LIPIcs, Volume 2, IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (2008)


Abstract
In many optimization problems, a solution can be viewed as ascribing a "cost" to each client, and the goal is to optimize some aggregation of the per-client costs. We often optimize some L_p-norm (or some other symmetric convex function or norm) of the vector of costs, though different applications may suggest different norms to use. Ideally, we could obtain a solution that optimizes several norms simultaneously. In this paper, we examine approximation algorithms that simultaneously perform well on all norms, or on all L_p norms. A natural problem in this framework is the L_p Set Cover problem, which generalizes Set Cover and Min-Sum Set Cover. We show that the greedy algorithm simultaneously gives a (p + ln p + O(1))-approximation for all p, and show that this approximation ratio is optimal up to constants under reasonable complexity-theoretic assumptions. We additionally show how to use our analysis techniques to give similar results for the more general submodular set cover, and prove some results for the so-called pipelined set cover problem. We then go on to examine approximation algorithms in the "all-norms" and the "all-L_p-norms" frameworks more broadly, and present algorithms and structural results for other problems such as k-facility-location, TSP, and average flow-time minimization, extending and unifying previously known results.
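
The greedy algorithm referred to above is the classical one: repeatedly pick the set covering the most still-uncovered elements. A minimal Python sketch (the L_p analysis is in the paper and is not reproduced here):

def greedy_set_cover(universe, sets):
    """Classical greedy set cover; the pick order matters for min-sum / L_p
    objectives, so the chosen sets are returned in order."""
    uncovered = set(universe)
    order = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("instance is not coverable")
        order.append(best)
        uncovered -= set(best)
    return order

# Example: covers {1,...,6} with two sets.
print(greedy_set_cover(range(1, 7), [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]))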

Cite as

Daniel Golovin, Anupam Gupta, Amit Kumar, and Kanat Tangwongsan. All-Norms and All-L_p-Norms Approximation Algorithms. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 2, pp. 199-210, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


Copy BibTex To Clipboard

@InProceedings{golovin_et_al:LIPIcs.FSTTCS.2008.1753,
  author =	{Golovin, Daniel and Gupta, Anupam and Kumar, Amit and Tangwongsan, Kanat},
  title =	{{All-Norms and All-L\underlinep-Norms Approximation Algorithms}},
  booktitle =	{IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science},
  pages =	{199--210},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-08-8},
  ISSN =	{1868-8969},
  year =	{2008},
  volume =	{2},
  editor =	{Hariharan, Ramesh and Mukund, Madhavan and Vinay, V},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2008.1753},
  URN =		{urn:nbn:de:0030-drops-17537},
  doi =		{10.4230/LIPIcs.FSTTCS.2008.1753},
  annote =	{Keywords: Approximation algorithms, set-cover problems, combinatorial optimization, sampling minkowski norms}
}

Kumar, Mithilesh

Document
Lossy Kernels for Connected Dominating Set on Sparse Graphs

Authors: Eduard Eiben, Mithilesh Kumar, Amer E. Mouawad, Fahad Panolan, and Sebastian Siebertz

Published in: LIPIcs, Volume 96, 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)


Abstract
For alpha > 1, an alpha-approximate (bi-)kernel for a problem Q is a polynomial-time algorithm that takes as input an instance (I, k) of Q and outputs an instance (I',k') (of a problem Q') of size bounded by a function of k such that, for every c >= 1, a c-approximate solution for the new instance can be turned into a (c alpha)-approximate solution of the original instance in polynomial time. This framework of lossy kernelization was recently introduced by Lokshtanov et al. We study Connected Dominating Set (and its distance-r variant) parameterized by solution size on sparse graph classes like biclique-free graphs, classes of bounded expansion, and nowhere dense classes. We prove that for every alpha > 1, Connected Dominating Set admits a polynomial-size alpha-approximate (bi-)kernel on all the aforementioned classes. Our results are in sharp contrast to the kernelization complexity of Connected Dominating Set, which is known to not admit a polynomial kernel even on 2-degenerate graphs and graphs of bounded expansion, unless NP \subseteq coNP/poly. We complement our results by the following conditional lower bound. We show that if a class C is somewhere dense and closed under taking subgraphs, then for some value of r \in N there cannot exist an alpha-approximate bi-kernel for the (Connected) Distance-r Dominating Set problem on C for any alpha > 1 (assuming the Gap Exponential Time Hypothesis).

Cite as

Eduard Eiben, Mithilesh Kumar, Amer E. Mouawad, Fahad Panolan, and Sebastian Siebertz. Lossy Kernels for Connected Dominating Set on Sparse Graphs. In 35th Symposium on Theoretical Aspects of Computer Science (STACS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 96, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{eiben_et_al:LIPIcs.STACS.2018.29,
  author =	{Eiben, Eduard and Kumar, Mithilesh and Mouawad, Amer E. and Panolan, Fahad and Siebertz, Sebastian},
  title =	{{Lossy Kernels for Connected Dominating Set on Sparse Graphs}},
  booktitle =	{35th Symposium on Theoretical Aspects of Computer Science (STACS 2018)},
  pages =	{29:1--29:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-062-0},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{96},
  editor =	{Niedermeier, Rolf and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2018.29},
  URN =		{urn:nbn:de:0030-drops-85027},
  doi =		{10.4230/LIPIcs.STACS.2018.29},
  annote =	{Keywords: Lossy Kernelization, Connected Dominating Set, Sparse Graph Classes}
}
Document
A 2lk Kernel for l-Component Order Connectivity

Authors: Mithilesh Kumar and Daniel Lokshtanov

Published in: LIPIcs, Volume 63, 11th International Symposium on Parameterized and Exact Computation (IPEC 2016)


Abstract
In the l-Component Order Connectivity problem (l in N), we are given a graph G on n vertices and m edges together with a non-negative integer k, and the task is to determine whether there exists a set of vertices S subseteq V(G) such that |S| <= k and the size of the largest connected component in G-S is at most l. In this paper, we give a kernel for l-Component Order Connectivity with at most 2*l*k vertices that can be computed in n^{O(l)} time for every constant l. On the way to obtaining our kernel, we prove a generalization of the q-Expansion Lemma to weighted graphs. This generalization may be of independent interest.

Cite as

Mithilesh Kumar and Daniel Lokshtanov. A 2lk Kernel for l-Component Order Connectivity. In 11th International Symposium on Parameterized and Exact Computation (IPEC 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 63, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.IPEC.2016.20,
  author =	{Kumar, Mithilesh and Lokshtanov, Daniel},
  title =	{{A 2lk Kernel for l-Component Order Connectivity}},
  booktitle =	{11th International Symposium on Parameterized and Exact Computation (IPEC 2016)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-023-1},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{63},
  editor =	{Guo, Jiong and Hermelin, Danny},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2016.20},
  URN =		{urn:nbn:de:0030-drops-69345},
  doi =		{10.4230/LIPIcs.IPEC.2016.20},
  annote =	{Keywords: Parameterized algorithms, Kernel, Component Order Connectivity, Max-min allocation, Weighted expansion}
}
Document
Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Bipartite Tournaments

Authors: Mithilesh Kumar and Daniel Lokshtanov

Published in: LIPIcs, Volume 65, 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)


Abstract
A bipartite tournament is a directed graph T:=(A cup B, E) such that every pair of vertices (a,b) with a in A and b in B is connected by an arc, and no arc connects two vertices of A or two vertices of B. A feedback vertex set is a set S of vertices in T such that T - S is acyclic. In this article we consider the Feedback Vertex Set problem in bipartite tournaments. Here the input is a bipartite tournament T on n vertices together with an integer k, and the task is to determine whether T has a feedback vertex set of size at most k. We give a new algorithm for Feedback Vertex Set in Bipartite Tournaments. The running time of our algorithm is upper-bounded by O(1.6181^k + n^{O(1)}), improving over the previously best known algorithm with running time (2^k)k^{O(1)} + n^{O(1)} [Hsiao, ISAAC 2011]. As a by-product, we also obtain the fastest currently known exact exponential-time algorithm for the problem, with running time O(1.3820^n).

Cite as

Mithilesh Kumar and Daniel Lokshtanov. Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Bipartite Tournaments. In 36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 65, pp. 24:1-24:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.FSTTCS.2016.24,
  author =	{Kumar, Mithilesh and Lokshtanov, Daniel},
  title =	{{Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Bipartite Tournaments}},
  booktitle =	{36th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2016)},
  pages =	{24:1--24:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-027-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{65},
  editor =	{Lal, Akash and Akshay, S. and Saurabh, Saket and Sen, Sandeep},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2016.24},
  URN =		{urn:nbn:de:0030-drops-68591},
  doi =		{10.4230/LIPIcs.FSTTCS.2016.24},
  annote =	{Keywords: Parameterized algorithms, Exact algorithms, Feedback vertex set, Tournaments, Bipartite tournaments}
}
Document
Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments

Authors: Mithilesh Kumar and Daniel Lokshtanov

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
A tournament is a directed graph T such that every pair of vertices is connected by an arc. A feedback vertex set is a set S of vertices in T such that T\S is acyclic. In this article we consider the FEEDBACK VERTEX SET problem in tournaments. Here the input is a tournament T and an integer k, and the task is to determine whether T has a feedback vertex set of size at most k. We give a new algorithm for FEEDBACK VERTEX SET IN TOURNAMENTS. The running time of our algorithm is upper-bounded by O(1.6181^k + n^{O(1)}) and by O(1.466^n). Thus our algorithm simultaneously improves over the fastest known parameterized algorithm for the problem by Dom et al. running in time O(2^kk^{O(1)} + n^{O(1)}), and the fastest known exact exponential-time algorithm by Gaspers and Mnich with running time O(1.674^n). On the way to proving our main result we prove a strengthening of a special case of a graph partitioning theorem due to Bollobas and Scott. In particular we show that the vertices of any undirected m-edge graph of maximum degree d can be colored white or black in such a way that for each of the two colors, the number of edges with both endpoints of that color is between m/4-d/2 and m/4+d/2.

Cite as

Mithilesh Kumar and Daniel Lokshtanov. Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 49:1-49:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.STACS.2016.49,
  author =	{Kumar, Mithilesh and Lokshtanov, Daniel},
  title =	{{Faster Exact and Parameterized Algorithm for Feedback Vertex Set in Tournaments}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{49:1--49:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.49},
  URN =		{urn:nbn:de:0030-drops-57501},
  doi =		{10.4230/LIPIcs.STACS.2016.49},
  annote =	{Keywords: Parameterized algorithms, Exact algorithms, Feedback vertex set, Tournaments, Graph partitions}
}

Kumar, Srivatsan

Document
Low Rank Approximation in the Presence of Outliers

Authors: Aditya Bhaskara and Srivatsan Kumar

Published in: LIPIcs, Volume 116, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)


Abstract
We consider the problem of principal component analysis (PCA) in the presence of outliers. Given a matrix A (d x n) and parameters k, m, the goal is to remove a set of at most m columns of A (outliers), so as to minimize the rank-k approximation error of the remaining matrix (inliers). While much of the work on this problem has focused on recovery of the rank-k subspace under assumptions on the inliers and outliers, we focus on the approximation problem. Our main result shows that sampling-based methods developed in the outlier-free case give non-trivial guarantees even in the presence of outliers. Using this insight, we develop a simple algorithm that has bi-criteria guarantees. Further, unlike similar formulations for clustering, we show that bi-criteria guarantees are unavoidable for the problem, under appropriate complexity assumptions.
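
As a point of reference for the objective only (not the paper's sampling-based algorithm), the rank-k approximation error of the inlier matrix follows from its singular values; the Python sketch below removes a given column set and reports the squared Frobenius error of the best rank-k approximation of what remains:

import numpy as np

def rank_k_error_after_removal(A, outlier_cols, k):
    """Squared Frobenius error of the best rank-k approximation of A with the
    given columns removed (Eckart-Young: sum of squared tail singular values).
    Choosing which m columns to remove is the hard part addressed in the paper."""
    keep = [j for j in range(A.shape[1]) if j not in set(outlier_cols)]
    s = np.linalg.svd(A[:, keep], compute_uv=False)
    return float(np.sum(s[k:] ** 2))

# Example: 5 x 8 random matrix, drop 2 columns, measure the rank-2 error.
A = np.random.default_rng(1).normal(size=(5, 8))
print(rank_k_error_after_removal(A, outlier_cols=[0, 3], k=2))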

Cite as

Aditya Bhaskara and Srivatsan Kumar. Low Rank Approximation in the Presence of Outliers. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 116, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{bhaskara_et_al:LIPIcs.APPROX-RANDOM.2018.4,
  author =	{Bhaskara, Aditya and Kumar, Srivatsan},
  title =	{{Low Rank Approximation in the Presence of Outliers}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-085-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{116},
  editor =	{Blais, Eric and Jansen, Klaus and D. P. Rolim, Jos\'{e} and Steurer, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2018.4},
  URN =		{urn:nbn:de:0030-drops-94087},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2018.4},
  annote =	{Keywords: Low rank approximation, PCA, Robustness to outliers}
}

Kumar, Anant

Document
Finding and Counting Patterns in Sparse Graphs

Authors: Balagopal Komarath, Anant Kumar, Suchismita Mishra, and Aditi Sethia

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
We consider algorithms for finding and counting small, fixed graphs in sparse host graphs. In the non-sparse setting, the parameters treedepth and treewidth play a crucial role in fast, constant-space and polynomial-space algorithms respectively. We discover two new parameters that we call matched treedepth and matched treewidth. We show that finding and counting patterns with low matched treedepth and low matched treewidth can be done asymptotically faster than the existing algorithms when the host graphs are sparse for many patterns. As an application to finding and counting fixed-size patterns, we discover Õ(m³)-time, constant-space algorithms for cycles of length at most 11 and Õ(m²)-time, polynomial-space algorithms for paths of length at most 10.
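
For flavor, a generic sparse-graph pattern count, triangles via neighborhood intersection over edges, is sketched below in Python; the paper's algorithms for longer cycles and paths are considerably more involved, and this is only a standard baseline:

from collections import defaultdict

def count_triangles(edges):
    """Count triangles in a simple undirected graph given as an edge list.
    Each triangle is seen once per edge, hence the final division by 3."""
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    total = 0
    for u in adj:
        for v in adj[u]:
            if u < v:                          # visit each edge once
                total += len(adj[u] & adj[v])  # common neighbors close a triangle
    return total // 3

print(count_triangles([(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]))  # K4 has 4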

Cite as

Balagopal Komarath, Anant Kumar, Suchismita Mishra, and Aditi Sethia. Finding and Counting Patterns in Sparse Graphs. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 40:1-40:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@InProceedings{komarath_et_al:LIPIcs.STACS.2023.40,
  author =	{Komarath, Balagopal and Kumar, Anant and Mishra, Suchismita and Sethia, Aditi},
  title =	{{Finding and Counting Patterns in Sparse Graphs}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{40:1--40:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.40},
  URN =		{urn:nbn:de:0030-drops-176921},
  doi =		{10.4230/LIPIcs.STACS.2023.40},
  annote =	{Keywords: Subgraph Detection and Counting, Homomorphism Polynomials, Treewidth and Treedepth, Matchings}
}
Document
Linear Space Data Structures for Finite Groups with Constant Query-Time

Authors: Bireswar Das, Anant Kumar, Shivdutt Sharma, and Dhara Thakkar

Published in: LIPIcs, Volume 219, 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)


Abstract
A finite group of order n can be represented by its Cayley table. In the word-RAM model the Cayley table of a group of order n can be stored using O(n²) words and can be used to answer a multiplication query in constant time. It is interesting to ask if we can design a data structure to store a group of order n that uses o(n²) space but can still answer a multiplication query in constant time. We design a constant query-time data structure that can store any finite group using O(n) words, where n is the order of the group. Farzan and Munro (ISSAC 2006) gave an information-theoretic lower bound of Ω(n) on the number of words to store a group of order n. Since our data structure achieves this lower bound and answers queries in constant time, it is optimal in both space usage and query-time. A crucial step in the process is essentially to design linear space and constant query-time data structures for nonabelian simple groups. The data structures for nonabelian simple groups are designed using a lemma that we prove via the Classification Theorem for Finite Simple Groups (CFSG).
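
For contrast with the O(n)-word structure described above, the naive baseline stores the full Cayley table: O(n²) words, constant-time multiplication queries. A tiny illustrative Python sketch of that baseline (not the paper's data structure):

class CayleyTable:
    """Naive O(n^2)-word group representation with O(1) multiplication queries;
    this is the baseline the paper improves on."""
    def __init__(self, table):
        self.table = table           # table[a][b] = index of the product a * b

    def multiply(self, a, b):
        return self.table[a][b]      # constant-time lookup

# Example: the cyclic group Z_4 under addition mod 4.
n = 4
z4 = CayleyTable([[(a + b) % n for b in range(n)] for a in range(n)])
print(z4.multiply(3, 2))  # 1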

Cite as

Bireswar Das, Anant Kumar, Shivdutt Sharma, and Dhara Thakkar. Linear Space Data Structures for Finite Groups with Constant Query-Time. In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 219, pp. 25:1-25:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Copy BibTex To Clipboard

@InProceedings{das_et_al:LIPIcs.STACS.2022.25,
  author =	{Das, Bireswar and Kumar, Anant and Sharma, Shivdutt and Thakkar, Dhara},
  title =	{{Linear Space Data Structures for Finite Groups with Constant Query-Time}},
  booktitle =	{39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022)},
  pages =	{25:1--25:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-222-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{219},
  editor =	{Berenbrink, Petra and Monmege, Benjamin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2022.25},
  URN =		{urn:nbn:de:0030-drops-158350},
  doi =		{10.4230/LIPIcs.STACS.2022.25},
  annote =	{Keywords: Compact Data Structures, Space Efficient Representations, Finite Groups, Simple Groups, Classification Theorem for Finite Simple Groups}
}