17 Search Results for "Kumar, Rohit"


Document
On Finding 𝓁-Th Smallest Perfect Matchings

Authors: Nicolas El Maalouly, Sebastian Haslebacher, Adrian Taubner, and Lasse Wulf

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
Given an undirected weighted graph G and an integer k, Exact-Weight Perfect Matching (EWPM) is the problem of finding a perfect matching of weight exactly k in G. In this paper, we study EWPM and its variants. The EWPM problem is famous, since in the case of unary encoded weights, Mulmuley, Vazirani, and Vazirani showed almost 40 years ago that the problem can be solved in randomized polynomial time. However, up to this date no derandomization is known. Our first result is a simple deterministic algorithm for EWPM that runs in time n^𝒪(𝓁), where 𝓁 is the number of distinct weights that perfect matchings in G can take. In fact, we show how to find an 𝓁-th smallest perfect matching in any weighted graph (even if the weights are encoded in binary, in which case EWPM in general is known to be NP-complete) in time n^𝒪(𝓁) for any integer 𝓁. Similar next-to-optimal variants have also been studied recently for the shortest path problem. For our second result, we extend the list of problems that are known to be equivalent to EWPM. We show that EWPM is equivalent under a weight-preserving reduction to the Exact Cycle Sum problem (ECS) in undirected graphs with a conservative (i.e. no negative cycles) weight function. To the best of our knowledge, we are the first to study this problem. As a consequence, the latter problem is contained in RP if the weights are encoded in unary. Finally, we identify a special case of EWPM, called BCPM, which was recently studied by El Maalouly, Steiner and Wulf. We show that BCPM is equivalent under a weight-preserving transformation to another problem recently studied by Schlotter and Sebő as well as Geelen and Kapadia: the Shortest Odd Cycle problem (SOC) in undirected graphs with conservative weights. Finally, our n^𝒪(𝓁) algorithm works in this setting as well, identifying a tractable special case of SOC, BCPM, and ECS.
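
As a point of reference for the problem statement (and not the paper's n^𝒪(𝓁) algorithm), the following brute-force Python sketch enumerates all perfect matchings of a small weighted graph; from the resulting set of attainable weights one can read off both EWPM membership of a target k and an 𝓁-th smallest perfect matching. The function names and the dictionary-based edge encoding are our own.

def perfect_matchings(vertices, weight):
    # weight: dict mapping an edge (u, v) with u < v to its weight; absent pairs are non-edges.
    if not vertices:
        yield [], 0
        return
    u, rest = vertices[0], vertices[1:]
    for v in rest:
        e = (u, v) if u < v else (v, u)
        if e not in weight:
            continue
        remaining = [x for x in rest if x != v]
        for m, w in perfect_matchings(remaining, weight):
            yield [e] + m, w + weight[e]

def lth_smallest_perfect_matching(vertices, weight, l):
    # Collect one witness matching per distinct attainable weight, then pick the l-th smallest.
    witness = {}
    for m, w in perfect_matchings(vertices, weight):
        witness.setdefault(w, m)
    weights = sorted(witness)
    return (weights[l - 1], witness[weights[l - 1]]) if l <= len(weights) else None

# Example: a 4-cycle whose two perfect matchings have weights 3 and 7, so l = 2 yields weight 7;
# EWPM with target k is solvable exactly when k is among the attainable weights.
w = {(0, 1): 1, (1, 2): 2, (2, 3): 2, (0, 3): 5}
print(lth_smallest_perfect_matching([0, 1, 2, 3], w, 2))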

Cite as

Nicolas El Maalouly, Sebastian Haslebacher, Adrian Taubner, and Lasse Wulf. On Finding 𝓁-Th Smallest Perfect Matchings. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 19:1-19:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{elmaalouly_et_al:LIPIcs.ESA.2025.19,
  author =	{El Maalouly, Nicolas and Haslebacher, Sebastian and Taubner, Adrian and Wulf, Lasse},
  title =	{{On Finding 𝓁-Th Smallest Perfect Matchings}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{19:1--19:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.19},
  URN =		{urn:nbn:de:0030-drops-244875},
  doi =		{10.4230/LIPIcs.ESA.2025.19},
  annote =	{Keywords: Exact Matching, Perfect Matching, Exact-Weight Perfect Matching, Shortest Odd Cycle, Exact Cycle Sum, l-th Smallest Solution, l-th Largest Solution, k-th Best Solution, Derandomization}
}
Document
Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique

Authors: Nikhil Kumar, J. J. Nan, and Chaitanya Swamy

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
In the classical survivable-network-design problem (SNDP), we are given an undirected graph G = (V, E), non-negative edge costs, and some k tuples (s_i,t_i,r_i), where s_i,t_i ∈ V and r_i ∈ ℤ_+. The objective is to find a minimum-cost subset H ⊆ E such that each s_i-t_i pair remains connected even after the failure of any r_i-1 edges. It is well-known that SNDP can be equivalently modeled using a weakly-supermodular cut-requirement function f, where the objective is to find the minimum-cost subset of edges that picks at least f(S) edges across every cut S ⊆ V. Recently, motivated by fault-tolerance in graph spanners, Dinitz, Koranteng, and Kortsarz proposed a variant of SNDP that enforces a relative level of fault tolerance with respect to G. Even if a feasible SNDP-solution may not exist due to G lacking the required fault-tolerance, the goal is to find a solution H that is at least as fault-tolerant as G itself. They formalize the latter condition in terms of paths and fault-sets, which gives rise to path-relative SNDP (which they call relative SNDP). Along these lines, we introduce a new model of relative network design, called cut-relative SNDP (CR-SNDP), where the goal is to select a minimum-cost subset of edges that satisfies the given (weakly-supermodular) cut-requirement function to the maximum extent possible, i.e., by picking min{f(S), |δ_G(S)|} edges across every cut S ⊆ V. Unlike SNDP, the cut-relative and path-relative versions of SNDP are not equivalent. The resulting cut-requirement function for CR-SNDP (as also path-relative SNDP) is not weakly supermodular, and extreme-point solutions to the natural LP-relaxation need not correspond to a laminar family of tight cut constraints. Consequently, standard techniques cannot be used directly to design approximation algorithms for this problem. We develop a novel decomposition technique to circumvent this difficulty and use it to give a tight 2-approximation algorithm for CR-SNDP. We also show some new hardness results for these relative-SNDP problems.
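
To make the cut-relative requirement concrete, here is a tiny brute-force feasibility check in Python (exponential in |V|, for toy instances only; it is not the paper's 2-approximation algorithm, and the SNDP-style requirement function shown is just one example of a weakly-supermodular f):

from itertools import combinations

def crossing(edges, S):
    S = set(S)
    return [(u, v) for (u, v) in edges if (u in S) != (v in S)]

def cut_relative_feasible(V, E, H, f):
    # CR-SNDP feasibility: |delta_H(S)| >= min(f(S), |delta_G(S)|) for every nonempty proper S.
    V = list(V)
    for r in range(1, len(V)):
        for S in combinations(V, r):
            if len(crossing(H, S)) < min(f(set(S)), len(crossing(E, S))):
                return False
    return True

def sndp_requirement(demands):
    # f(S) = largest requirement r_i over demand triples (s_i, t_i, r_i) separated by the cut S.
    return lambda S: max((r for (s, t, r) in demands if (s in S) != (t in S)), default=0)

# e.g. cut_relative_feasible({1, 2, 3}, E, H, sndp_requirement([(1, 3, 2)])) for edge lists E, H.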

Cite as

Nikhil Kumar, J. J. Nan, and Chaitanya Swamy. Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 38:1-38:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.ESA.2025.38,
  author =	{Kumar, Nikhil and Nan, J. J. and Swamy, Chaitanya},
  title =	{{Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{38:1--38:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.38},
  URN =		{urn:nbn:de:0030-drops-245061},
  doi =		{10.4230/LIPIcs.ESA.2025.38},
  annote =	{Keywords: Approximation algorithms, Network Design, Cut-requirement functions, Weak Supermodularity, Iterative rounding, LP rounding algorithms}
}
Document
RAGent: A Self-Learning RAG Agent for Adaptive Data Science Education

Authors: Mariia Vetluzhskikh and Fardina Fathmiul Alam

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
Undergraduate data science education faces a scalability challenge: addressing a high volume of diverse student questions stemming from varying levels of prior knowledge, technical skills, and learning styles - while ensuring timely and accurate responses. Traditional solutions like manual replies or generic chatbots often fall short in terms of contextual relevance, speed, and efficiency. To tackle this, we introduce RAGent, a Retrieval-Augmented Generation (RAG) agent tailored for a university-level data science course at the University of Maryland. RAGent integrates course-specific materials - lecture notes, assignments, and syllabi - to deliver fast, context-aware answers while maintaining low computational overhead. A central innovation of RAGent is its query classification system, which categorizes student questions into: (i) directly answerable, (ii) relevant but unresolved (requiring instructor input), and (iii) irrelevant or out-of-scope. This system uses semantic similarity, keyword relevance, and dynamic thresholds to drive a targeted prompting strategy, enhancing response accuracy. Another key feature is RAGent’s self-learning loop, which continuously improves performance by integrating resolved queries into its knowledge base and flagging unresolved ones for review and retraining. This dual mechanism ensures both immediate adaptability and long-term scalability. We evaluate RAGent using standard NLP metrics (accuracy, precision, recall, F1-score) and report strong performance in filtering and answering student queries. In a user study with 125 students, over 94% expressed a desire to keep RAGent in the course, citing improved clarity and helpfulness. These results suggest that RAGent significantly enhances support in data science education by providing accurate, contextual responses and reducing instructor workload - offering a scalable, adaptive alternative to conventional support methods. Future work will explore deployment across additional courses and institutions to further validate the RAGent’s adaptability.
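
A minimal sketch of the three-way routing idea described above, assuming query and document embeddings are already available as vectors; the thresholds and function names are hypothetical, and the system described additionally combines this with keyword relevance and dynamic thresholds.

import numpy as np

def classify_query(q_vec, doc_vecs, t_answer=0.75, t_review=0.45):
    # Cosine similarity of the query against each course document embedding.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    best = int(sims.argmax())
    if sims[best] >= t_answer:
        return "directly_answerable", best      # answer from the retrieved context
    if sims[best] >= t_review:
        return "needs_instructor", best         # relevant but unresolved: flag for review
    return "out_of_scope", None                 # filter out irrelevant questions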

Cite as

Mariia Vetluzhskikh and Fardina Fathmiul Alam. RAGent: A Self-Learning RAG Agent for Adaptive Data Science Education. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 8:1-8:10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{vetluzhskikh_et_al:OASIcs.ICPEC.2025.8,
  author =	{Vetluzhskikh, Mariia and Alam, Fardina Fathmiul},
  title =	{{RAGent: A Self-Learning RAG Agent for Adaptive Data Science Education}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{8:1--8:10},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.8},
  URN =		{urn:nbn:de:0030-drops-240387},
  doi =		{10.4230/OASIcs.ICPEC.2025.8},
  annote =	{Keywords: RAG, Agent, Chatbot, Data Science, Education, Query Classification, Information Retrieval, LLM}
}
Document
APPROX
Covering Simple Orthogonal Polygons with Rectangles

Authors: Aniket Basu Roy

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
We study the problem of Covering Orthogonal Polygons with Rectangles, focusing on three variants: covering the interior, the boundary, and the corners. While previous work provided constant-factor approximation algorithms for these problems, significant improvements had not been achieved for over two decades. The main contribution of this work is the development of a Polynomial Time Approximation Scheme (PTAS) for both the Boundary Cover and Corner Cover problems on simple polygons, using a local search algorithm. Our work advances the state of the art, improving upon the previous best-known 4-approximation for the Boundary Cover and 2-approximation for the Corner Cover problems. The technical core of our work lies in proving the existence of planar support graphs for certain geometric hypergraphs defined by the polygon and its containment-maximal rectangles. This structural insight enables the application of the local search framework to achieve the PTAS results. We also demonstrate the limitations of this approach by constructing instances where local search fails for the Interior Cover and certain dual problems, such as the Maximum Antirectangle and Hitting Set problems. Additionally, the methods yield a PTAS for a special case of the Discrete Independent Set problem for rectangles. These results not only settle longstanding open questions but also introduce new techniques that may be of independent interest within computational geometry.
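
The local search framework referenced above is, in outline, the standard bounded-swap procedure sketched below in Python (a generic set-cover version under our own parameter choices; the paper's contribution lies in the analysis via planar support graphs, not in the search loop itself).

from itertools import combinations

def covers(universe, chosen, sets):
    return universe <= set().union(*(sets[s] for s in chosen)) if chosen else not universe

def local_search_cover(universe, sets, p=2):
    # sets: dict name -> set of covered elements. Start from the trivial cover and repeatedly
    # try to swap out up to p chosen sets for strictly fewer new ones while staying feasible.
    sol = set(sets)
    improved = True
    while improved:
        improved = False
        for k in range(1, p + 1):
            for out in combinations(sorted(sol), k):
                for j in range(k):
                    for into in combinations(sorted(set(sets) - sol), j):
                        cand = (sol - set(out)) | set(into)
                        if covers(universe, cand, sets):
                            sol, improved = cand, True
                            break
                    if improved:
                        break
                if improved:
                    break
            if improved:
                break
    return sol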

Cite as

Aniket Basu Roy. Covering Simple Orthogonal Polygons with Rectangles. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{basuroy:LIPIcs.APPROX/RANDOM.2025.2,
  author =	{Basu Roy, Aniket},
  title =	{{Covering Simple Orthogonal Polygons with Rectangles}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.2},
  URN =		{urn:nbn:de:0030-drops-243686},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.2},
  annote =	{Keywords: Polygon Covering, Approximation Algorithms, Orthogonal Polygons, Rectangles, Local Search, Planar Supports}
}
Document
APPROX
Improved FPT Approximation for Sum of Radii Clustering with Mergeable Constraints

Authors: Sayan Bandyapadhyay and Tianzhi Chen

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
In this work, we study k-min-sum-of-radii (k-MSR) clustering under mergeable constraints. k-MSR seeks to group data points using a set of up to k balls, such that the sum of the radii of the balls is minimized. A clustering constraint is called mergeable if merging two clusters satisfying the constraint results in a cluster that also satisfies the constraint. Many popularly studied constraints are mergeable, including fairness constraints and lower bound constraints. In our work, we design a (4+ε)-approximation for k-MSR under any given mergeable constraint with runtime 2^{O((k/ε)⋅log²(k/ε))} n⁴, i.e., fixed-parameter tractable in k for constant ε. Our result directly improves upon the FPT (6+ε)-approximation by Carta et al. [Carta et al., 2024]. We also provide a hardness result that excludes the exact solvability of k-MSR under any given mergeable constraint in time f(k)n^o(k), assuming ETH is true.
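
For reference, the k-MSR objective and a typical mergeable constraint look as follows in Python (an illustration of the definitions only; helper names are ours):

import math

def sum_of_radii(clusters, centers, dist):
    # Cost of a clustering: each cluster C_i is covered by a ball around centers[i] whose
    # radius is the distance to the farthest point of C_i; the objective sums these radii.
    return sum(max((dist(centers[i], p) for p in C), default=0.0)
               for i, C in enumerate(clusters))

def satisfies_lower_bound(cluster, L):
    # A mergeable constraint: if two clusters each contain at least L points, so does their union.
    return len(cluster) >= L

euclid = lambda a, b: math.dist(a, b)   # math.dist is available in Python 3.8+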

Cite as

Sayan Bandyapadhyay and Tianzhi Chen. Improved FPT Approximation for Sum of Radii Clustering with Mergeable Constraints. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 23:1-23:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bandyapadhyay_et_al:LIPIcs.APPROX/RANDOM.2025.23,
  author =	{Bandyapadhyay, Sayan and Chen, Tianzhi},
  title =	{{Improved FPT Approximation for Sum of Radii Clustering with Mergeable Constraints}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{23:1--23:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.23},
  URN =		{urn:nbn:de:0030-drops-243894},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.23},
  annote =	{Keywords: sum-of-radii clustering, mergeable constraints, approximation algorithm}
}
Document
Combining Generalization Algorithms in Regular Collapse-Free Theories

Authors: Mauricio Ayala-Rincón, David M. Cerna, Temur Kutsia, and Christophe Ringeissen

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
We look at the generalization problem modulo some equational theories. This problem is dual to the unification problem: given two input terms, we want to find a common term whose respective two instances are equivalent to the original terms modulo the theory. There exist algorithms for finding generalizations over various equational theories. We focus on modular construction of equational generalization algorithms for the union of signature-disjoint theories. Specifically, we consider the class of regular and collapse-free theories, showing how to combine existing generalization algorithms to produce specific solutions in these cases. Additionally, we identify a class of theories that admit a generalization algorithm based on the application of axioms to resolve the problem. To define this class, we rely on the notion of syntactic theories, a concept originally introduced to develop unification procedures similar to the one known for syntactic unification. We demonstrate that syntactic theories are also helpful in developing generalization procedures similar to those used for syntactic generalization.
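
The syntactic (empty-theory) base case that such combination procedures build on is Plotkin-style generalization; below is a minimal Python sketch, with terms encoded as ('symbol', args) tuples and variables as plain strings (our encoding, not the paper's formalism):

def lgg(s, t, mapping=None):
    # Least general generalization modulo the empty theory: equal subterms are kept, and each
    # distinct mismatched pair of subterms is replaced by the same fresh variable everywhere.
    if mapping is None:
        mapping = {}
    if s == t:
        return s
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s[1]) == len(t[1]):
        return (s[0], tuple(lgg(a, b, mapping) for a, b in zip(s[1], t[1])))
    key = (repr(s), repr(t))
    mapping.setdefault(key, "X" + str(len(mapping)))
    return mapping[key]

# The generalization of f(a, g(b)) and f(c, g(b)) is f(X0, g(b)):
a, b, c = ("a", ()), ("b", ()), ("c", ())
print(lgg(("f", (a, ("g", (b,)))), ("f", (c, ("g", (b,))))))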

Cite as

Mauricio Ayala-Rincón, David M. Cerna, Temur Kutsia, and Christophe Ringeissen. Combining Generalization Algorithms in Regular Collapse-Free Theories. In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 7:1-7:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ayalarincon_et_al:LIPIcs.FSCD.2025.7,
  author =	{Ayala-Rinc\'{o}n, Mauricio and Cerna, David M. and Kutsia, Temur and Ringeissen, Christophe},
  title =	{{Combining Generalization Algorithms in Regular Collapse-Free Theories}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{7:1--7:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.7},
  URN =		{urn:nbn:de:0030-drops-236228},
  doi =		{10.4230/LIPIcs.FSCD.2025.7},
  annote =	{Keywords: Generalization, Anti-unification, Equational theories, Combination}
}
Document
Track A: Algorithms, Complexity and Games
The Long Arm of Nashian Allocation in Online p-Mean Welfare Maximization

Authors: Zhiyi Huang, Chui Shan Lee, Xinkai Shu, and Zhaozi Wang

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
We study the online allocation of divisible items to n agents with additive valuations for p-mean welfare maximization, a problem introduced by Barman, Khan, and Maiti (2022). Our algorithmic and hardness results characterize the optimal competitive ratios for the entire spectrum of -∞ ≤ p ≤ 1. Surprisingly, our improved algorithms for all p ≤ 1/(log n) are simply the greedy algorithm for the Nash welfare, supplemented with two auxiliary components to ensure all agents have non-zero utilities and to help a small number of agents with low utilities. In this sense, the long arm of Nashian allocation achieves near-optimal competitive ratios not only for Nash welfare but also all the way to egalitarian welfare.
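
For orientation, the p-mean welfare objective interpolating between these regimes can be written down directly (a definition-level Python sketch assuming strictly positive utilities; p = 1 is utilitarian, p -> 0 is Nash, and p -> -infinity is egalitarian welfare):

import math

def p_mean_welfare(utilities, p):
    # Assumes all utilities are strictly positive.
    n = len(utilities)
    if p == float("-inf"):
        return min(utilities)                    # egalitarian welfare
    if p == 0:
        return math.prod(utilities) ** (1 / n)   # Nash (geometric-mean) welfare
    return (sum(u ** p for u in utilities) / n) ** (1 / p)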

Cite as

Zhiyi Huang, Chui Shan Lee, Xinkai Shu, and Zhaozi Wang. The Long Arm of Nashian Allocation in Online p-Mean Welfare Maximization. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 98:1-98:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{huang_et_al:LIPIcs.ICALP.2025.98,
  author =	{Huang, Zhiyi and Lee, Chui Shan and Shu, Xinkai and Wang, Zhaozi},
  title =	{{The Long Arm of Nashian Allocation in Online p-Mean Welfare Maximization}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{98:1--98:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.98},
  URN =		{urn:nbn:de:0030-drops-234754},
  doi =		{10.4230/LIPIcs.ICALP.2025.98},
  annote =	{Keywords: Online Algorithms, Fair Division, Nash Welfare}
}
Document
Uniform Bounds on Product Sylvester-Gallai Configurations

Authors: Abhibhav Garg, Rafael Oliveira, and Akash Kumar Sengupta

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
In this work, we explore a non-linear extension of the classical Sylvester-Gallai configuration. Let 𝕂 be an algebraically closed field of characteristic zero, and let ℱ = {F_1, …, F_m} ⊂ 𝕂[x_1, …, x_N] denote a collection of irreducible homogeneous polynomials of degree at most d, where each F_i is not a scalar multiple of any other F_j for i ≠ j. We define ℱ to be a product Sylvester-Gallai configuration if, for any two distinct polynomials F_i, F_j ∈ ℱ, the following condition is satisfied: ∏_{k≠i, j} F_k ∈ rad (F_i, F_j) . We prove that product Sylvester-Gallai configurations are inherently low dimensional. Specifically, we show that there exists a function λ : ℕ → ℕ, independent of 𝕂, N, and m, such that any product Sylvester-Gallai configuration must satisfy: dim(span_𝕂(ℱ)) ≤ λ(d). This result generalizes the main theorems from (Shpilka 2019, Peleg and Shpilka 2020, Oliveira and Sengupta 2023), and gets us one step closer to a full derandomization of the polynomial identity testing problem for the class of depth 4 circuits with bounded top and bottom fan-in.

Cite as

Abhibhav Garg, Rafael Oliveira, and Akash Kumar Sengupta. Uniform Bounds on Product Sylvester-Gallai Configurations. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{garg_et_al:LIPIcs.SoCG.2025.52,
  author =	{Garg, Abhibhav and Oliveira, Rafael and Sengupta, Akash Kumar},
  title =	{{Uniform Bounds on Product Sylvester-Gallai Configurations}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.52},
  URN =		{urn:nbn:de:0030-drops-232043},
  doi =		{10.4230/LIPIcs.SoCG.2025.52},
  annote =	{Keywords: Sylvester-Gallai theorem, arrangements of hypersurfaces, algebraic complexity, polynomial identity testing, algebraic geometry, commutative algebra}
}
Document
On Approximability of 𝓁₂² Min-Sum Clustering

Authors: Karthik C. S., Euiwoong Lee, Yuval Rabani, Chris Schwiegelshohn, and Samson Zhou

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
The 𝓁₂² min-sum k-clustering problem is to partition an input set into clusters C_1,…,C_k to minimize ∑_{i=1}^k ∑_{p,q ∈ C_i} ‖p-q‖₂². Although 𝓁₂² min-sum k-clustering is NP-hard, it is not known whether it is NP-hard to approximate 𝓁₂² min-sum k-clustering beyond a certain factor. In this paper, we give the first hardness-of-approximation result for the 𝓁₂² min-sum k-clustering problem. We show that it is NP-hard to approximate the objective to a factor better than 1.056 and moreover, assuming a balanced variant of the Johnson Coverage Hypothesis, it is NP-hard to approximate the objective to a factor better than 1.327. We then complement our hardness result by giving a fast PTAS for 𝓁₂² min-sum k-clustering. Specifically, our algorithm runs in time O(n^{1+o(1)}d⋅ 2^{(k/ε)^O(1)}), which is the first nearly linear time algorithm for this problem. We also consider a learning-augmented setting, where the algorithm has access to an oracle that outputs a label i ∈ [k] for each input point, thereby implicitly partitioning the input dataset into k clusters that induce an approximately optimal solution, up to some amount of adversarial error α ∈ [0,1/2). We give a polynomial-time algorithm that outputs a (1+γα)/(1-α)²-approximation to 𝓁₂² min-sum k-clustering, for a fixed constant γ > 0.
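
A direct NumPy transcription of the objective (definition only; as in the abstract's double sum it counts ordered pairs, so the unordered-pair convention is half of this value):

import numpy as np

def l22_min_sum_cost(points, labels, k):
    # Sum over clusters of all pairwise squared Euclidean distances within the cluster.
    total = 0.0
    for i in range(k):
        C = points[labels == i]
        diffs = C[:, None, :] - C[None, :, :]
        total += float((diffs ** 2).sum())
    return total

# Equivalently, each cluster contributes 2 * |C_i| * (sum of squared distances to its centroid).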

Cite as

Karthik C. S., Euiwoong Lee, Yuval Rabani, Chris Schwiegelshohn, and Samson Zhou. On Approximability of 𝓁₂² Min-Sum Clustering. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 62:1-62:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{karthikc.s._et_al:LIPIcs.SoCG.2025.62,
  author =	{Karthik C. S. and Lee, Euiwoong and Rabani, Yuval and Schwiegelshohn, Chris and Zhou, Samson},
  title =	{{On Approximability of 𝓁₂² Min-Sum Clustering}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{62:1--62:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.62},
  URN =		{urn:nbn:de:0030-drops-232142},
  doi =		{10.4230/LIPIcs.SoCG.2025.62},
  annote =	{Keywords: Clustering, hardness of approximation, polynomial-time approximation schemes, learning-augmented algorithms}
}
Document
Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering

Authors: Ameet Gadekar and Tanmay Inamdar

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
Hybrid k-Clustering is a model of clustering that generalizes two of the most widely studied clustering objectives: k-Center and k-Median. In this model, given a set of n points P, the goal is to find k centers such that the sum of the r-distances of each point to its nearest center is minimized. The r-distance between two points p and q is defined as max{dist(p, q)-r, 0} - this represents the distance of p to the boundary of the r-radius ball around q if p is outside the ball, and 0 otherwise. This problem was recently introduced by Fomin et al. [APPROX 2024], who designed a (1+ε, 1+ε)-bicriteria approximation that runs in time 2^{(kd/ε)^{O(1)}} ⋅ n^{O(1)} for inputs in ℝ^d; such a bicriteria solution uses balls of radius (1+ε)r instead of r, and has a cost at most 1+ε times the cost of an optimal solution using balls of radius r. In this paper we significantly improve upon this result by designing an approximation algorithm with the same bicriteria guarantee, but with running time that is FPT only in k and ε - crucially, removing the exponential dependence on the dimension d. This resolves an open question posed in their paper. Our results extend further in several directions. First, our approximation scheme works in a broader class of metric spaces, including doubling spaces, minor-free, and bounded treewidth metrics. Secondly, our techniques yield similar bicriteria FPT-approximation schemes for other variants of Hybrid k-Clustering, e.g., when the objective features the sum of z-th powers of the r-distances. Finally, we also design a coreset for Hybrid k-Clustering in doubling spaces, answering another open question from the work of Fomin et al.
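
The r-distance and the Hybrid k-Clustering cost are simple to state in code (definition-level Python; r = 0 recovers the k-Median objective, while k-Center corresponds to the smallest r for which the optimal cost is 0):

def r_distance(p, q, r, dist):
    # Distance of p to the boundary of the radius-r ball around q (0 if p lies inside the ball).
    return max(dist(p, q) - r, 0.0)

def hybrid_cost(points, centers, r, dist):
    # Each point pays its r-distance to the nearest of the k chosen centers.
    return sum(min(r_distance(p, c, r, dist) for c in centers) for p in points)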

Cite as

Ameet Gadekar and Tanmay Inamdar. Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gadekar_et_al:LIPIcs.STACS.2025.35,
  author =	{Gadekar, Ameet and Inamdar, Tanmay},
  title =	{{Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.35},
  URN =		{urn:nbn:de:0030-drops-228615},
  doi =		{10.4230/LIPIcs.STACS.2025.35},
  annote =	{Keywords: Clustering, Parameterized algorithms, FPT approximation, k-Median, k-Center}
}
Document
Propositional Logics of Overwhelming Truth

Authors: Thibaut Antoine and David Baelde

Published in: LIPIcs, Volume 326, 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)


Abstract
Cryptographers consider that asymptotic security holds when, for any possible attacker running in polynomial time, the probability that the attack succeeds is negligible, i.e. that it tends fast enough to zero with the size of secrets. In order to reason formally about cryptographic truth, one may thus consider logics where a formula is satisfied when it is true with overwhelming probability, i.e. a probability that tends fast enough to one with the size of secrets. In such logics it is not always the case that either ϕ or ¬ϕ is satisfied by a given model. However, security analyses will inevitably involve specific formulas, which we call determined, satisfying this property - typically because they are not probabilistic. The Squirrel proof assistant, which implements a logic of overwhelming truth, features ad-hoc proof rules for this purpose. In this paper, we study several propositional logics whose semantics rely on overwhelming truth. We first consider a modal logic of overwhelming truth, and show that it coincides with S5. In addition to providing an axiomatization, this brings a well-behaved proof system for our logic in the form of Poggiolesi’s hypersequent calculus. Further, we show that this system can be adapted to elegantly incorporate reasoning on determined atoms. We then consider a logic that is closer to Squirrel’s language, where the overwhelming truth modality cannot be nested. In that case, we show that a simple proof system, based on regular sequents, is sound and complete. This result justifies the core of Squirrel’s proof system.

Cite as

Thibaut Antoine and David Baelde. Propositional Logics of Overwhelming Truth. In 33rd EACSL Annual Conference on Computer Science Logic (CSL 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 326, pp. 24:1-24:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{antoine_et_al:LIPIcs.CSL.2025.24,
  author =	{Antoine, Thibaut and Baelde, David},
  title =	{{Propositional Logics of Overwhelming Truth}},
  booktitle =	{33rd EACSL Annual Conference on Computer Science Logic (CSL 2025)},
  pages =	{24:1--24:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-362-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{326},
  editor =	{Endrullis, J\"{o}rg and Schmitz, Sylvain},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2025.24},
  URN =		{urn:nbn:de:0030-drops-227818},
  doi =		{10.4230/LIPIcs.CSL.2025.24},
  annote =	{Keywords: Cryptography, Modal Logic, Sequent Calculus}
}
Document
Vision
Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges

Authors: Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou

Published in: TGDK, Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 1, Issue 1


Abstract
The graph model is nowadays largely adopted to model a wide range of knowledge and data, spanning from social networks to knowledge graphs (KGs), representing a successful paradigm of how symbolic and transparent AI can scale on the World Wide Web. However, due to their unprecedented volume, they are generally tackled by Machine Learning (ML) and mostly numeric-based methods such as knowledge graph embedding models (KGE) and deep neural networks (DNNs). The latter methods have lately proved very efficient, leading the current AI spring. In this vision paper, we introduce some of the main existing methods for combining KGs and ML, divided into two categories: those using ML to improve KGs, and those using KGs to improve results on ML tasks. From this introduction, we highlight research gaps and perspectives that we deem promising and currently under-explored for the involved research communities, spanning from KG support for LLM prompting, integration of KG semantics in ML models to symbol-based methods, interpretability of ML models, and the need for improved benchmark datasets. In our opinion, such perspectives are stepping stones in an ultimate view of KGs as central assets for neuro-symbolic and explainable AI.

Cite as

Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou. Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 8:1-8:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{damato_et_al:TGDK.1.1.8,
  author =	{d'Amato, Claudia and Mahon, Louis and Monnin, Pierre and Stamou, Giorgos},
  title =	{{Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{8:1--8:35},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.8},
  URN =		{urn:nbn:de:0030-drops-194824},
  doi =		{10.4230/TGDK.1.1.8},
  annote =	{Keywords: Graph-based Learning, Knowledge Graph Embeddings, Large Language Models, Explainable AI, Knowledge Graph Completion \& Curation}
}
Document
DAISIM: A Computational Simulator for the MakerDAO Stablecoin

Authors: Shreyas Bhat, Ayten Betul Kahya, Bhaskar Krishnamachari, and Rohit Kumar

Published in: OASIcs, Volume 92, 4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021)


Abstract
We present a computational simulation of the single-collateral DAI stablecoin launched by the MakerDAO project in 2017. At the core of the simulation is a model of cryptocurrency investors acting as rational Markowitz mean-variance portfolio optimizers, with heterogeneous risk tolerance. The simulator, called DAISIM, incorporates automated order matching and price update mechanisms to determine the DAI price. We use the simulator to evaluate how the single-collateral DAI price, as well as portfolio allocations, vary for a given population of investors as a function of exogenous parameters such as the price of ETH and various system parameters including stability rate and transaction fee. DAISIM is being made available as open-source and may be useful in evaluating other similar projects.
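
As an illustration of the investor model described above, here is a generic textbook mean-variance allocation in NumPy (closed form under a budget constraint only; DAISIM's actual asset universe, constraints, and order-matching mechanics are richer, and rho below is a hypothetical per-investor risk-tolerance parameter).

import numpy as np

def mean_variance_weights(mu, Sigma, rho):
    # Maximize rho * mu'w - 0.5 * w'Sigma w  subject to  sum(w) = 1 (no short-sale constraint).
    inv = np.linalg.inv(Sigma)
    ones = np.ones(len(mu))
    w = rho * inv @ mu                                           # unconstrained optimum
    w += (1.0 - ones @ w) / (ones @ inv @ ones) * (inv @ ones)   # enforce the budget constraint
    return w

# e.g. two assets (say a volatile one and a stable one) with expected returns mu and covariance Sigma:
mu = np.array([0.10, 0.01])
Sigma = np.array([[0.09, 0.00], [0.00, 0.0001]])
print(mean_variance_weights(mu, Sigma, rho=0.5))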

Cite as

Shreyas Bhat, Ayten Betul Kahya, Bhaskar Krishnamachari, and Rohit Kumar. DAISIM: A Computational Simulator for the MakerDAO Stablecoin. In 4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021). Open Access Series in Informatics (OASIcs), Volume 92, pp. 3:1-3:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{bhat_et_al:OASIcs.FAB.2021.3,
  author =	{Bhat, Shreyas and Kahya, Ayten Betul and Krishnamachari, Bhaskar and Kumar, Rohit},
  title =	{{DAISIM: A Computational Simulator for the MakerDAO Stablecoin}},
  booktitle =	{4th International Symposium on Foundations and Applications of Blockchain 2021 (FAB 2021)},
  pages =	{3:1--3:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-196-2},
  ISSN =	{2190-6807},
  year =	{2021},
  volume =	{92},
  editor =	{Gramoli, Vincent and Sadoghi, Mohammad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.FAB.2021.3},
  URN =		{urn:nbn:de:0030-drops-139899},
  doi =		{10.4230/OASIcs.FAB.2021.3},
  annote =	{Keywords: Stablecoin, Simulator, MakerDAO}
}
Document
RANDOM
Polynomial Identity Testing for Low Degree Polynomials with Optimal Randomness

Authors: Markus Bläser and Anurag Pandey

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
We give a randomized polynomial time algorithm for polynomial identity testing for the class of n-variate polynomials of degree bounded by d over a field 𝔽, in the blackbox setting. Our algorithm works for every field 𝔽 with | 𝔽 | ≥ d+1, and uses only d log n + log (1/ ε) + O(d log log n) random bits to achieve a success probability 1 - ε for some ε > 0. In the low degree regime, that is d ≪ n, it hits the information theoretic lower bound and differs from it only in the lower order terms. The previous best known algorithms (Guruswami-Xing, CCC'14 and Bshouty, ITCS'14) use a number of random bits that is a constant factor away from our bound. Like Bshouty, we use Sidon sets for our algorithm. However, we use a new construction of Sidon sets to achieve the improved bound. We also collect two simple constructions of hitting sets with information theoretically optimal size against the class of n-variate, degree d polynomials. Our contribution is that we give new, very simple proofs for both the constructions.
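
For contrast with the randomness-efficient construction above, the textbook blackbox test spends roughly n·log2(p) random bits; here is a baseline Schwartz-Zippel sketch in Python (the prime p and the blackbox interface are our assumptions, not the paper's algorithm):

import random

def schwartz_zippel_pit(blackbox, n, d, p):
    # One random evaluation over F_p: a nonzero n-variate polynomial of degree at most d
    # vanishes at a uniform point with probability at most d / p, so any prime p > d works,
    # at the cost of about n * log2(p) random bits.
    point = [random.randrange(p) for _ in range(n)]
    return blackbox(point) % p == 0   # True means "probably identically zero"

# e.g. schwartz_zippel_pit(lambda x: x[0] * x[1] - x[1] * x[0], n=2, d=2, p=101) is always True.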

Cite as

Markus Bläser and Anurag Pandey. Polynomial Identity Testing for Low Degree Polynomials with Optimal Randomness. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 8:1-8:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{blaser_et_al:LIPIcs.APPROX/RANDOM.2020.8,
  author =	{Bl\"{a}ser, Markus and Pandey, Anurag},
  title =	{{Polynomial Identity Testing for Low Degree Polynomials with Optimal Randomness}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{8:1--8:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.8},
  URN =		{urn:nbn:de:0030-drops-126112},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.8},
  annote =	{Keywords: Algebraic Complexity theory, Polynomial Identity Testing, Hitting Set, Pseudorandomness}
}
Document
Lower Bounds for Matrix Factorization

Authors: Mrinal Kumar and Ben Lee Volk

Published in: LIPIcs, Volume 169, 35th Computational Complexity Conference (CCC 2020)


Abstract
We study the problem of constructing explicit families of matrices which cannot be expressed as a product of a few sparse matrices. In addition to being a natural mathematical question on its own, this problem appears in various incarnations in computer science; the most significant being in the context of lower bounds for algebraic circuits which compute linear transformations, matrix rigidity and data structure lower bounds. We first show, for every constant d, a deterministic construction in time exp(n^(1-Ω(1/d))) of a family {M_n} of n × n matrices which cannot be expressed as a product M_n = A_1 ⋯ A_d where the total sparsity of A_1,…,A_d is less than n^(1+1/(2d)). In other words, any depth-d linear circuit computing the linear transformation M_n⋅ 𝐱 has size at least n^(1+Ω(1/d)). This improves upon the prior best lower bounds for this problem, which are barely super-linear, and were obtained by a long line of research based on the study of super-concentrators (albeit at the cost of a blow up in the time required to construct these matrices). We then outline an approach for proving improved lower bounds through a certain derandomization problem, and use this approach to prove asymptotically optimal quadratic lower bounds for natural special cases, which generalize many of the common matrix decompositions.
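
The quantity the lower bound is stated in terms of, the total sparsity of a depth-d factorization, is easy to compute for small examples (NumPy sketch; the FFT remark in the comment is a standard illustration, not a claim from the paper):

import numpy as np

def total_sparsity(factors):
    # Total number of nonzero entries across the factors A_1, ..., A_d.
    return sum(int(np.count_nonzero(A)) for A in factors)

def check_factorization(M, factors):
    prod = factors[0]
    for A in factors[1:]:
        prod = prod @ A
    return bool(np.allclose(prod, M)), total_sparsity(factors)

# The FFT, for instance, writes the n x n DFT matrix as O(log n) sparse factors of total
# sparsity O(n log n); the abstract's result exhibits explicit matrices admitting no
# comparably sparse depth-d factorization.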

Cite as

Mrinal Kumar and Ben Lee Volk. Lower Bounds for Matrix Factorization. In 35th Computational Complexity Conference (CCC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 169, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.CCC.2020.5,
  author =	{Kumar, Mrinal and Volk, Ben Lee},
  title =	{{Lower Bounds for Matrix Factorization}},
  booktitle =	{35th Computational Complexity Conference (CCC 2020)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-156-6},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{169},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2020.5},
  URN =		{urn:nbn:de:0030-drops-125578},
  doi =		{10.4230/LIPIcs.CCC.2020.5},
  annote =	{Keywords: Algebraic Complexity, Linear Circuits, Matrix Factorization, Lower Bounds}
}
  • Refine by Type
  • 17 Document/PDF
  • 12 Document/HTML

  • Refine by Publication Year
  • 11 2025
  • 1 2023
  • 1 2021
  • 2 2020
  • 2 2009

  • Refine by Author
  • 2 Khandekar, Rohit
  • 1 Alam, Fardina Fathmiul
  • 1 Antoine, Thibaut
  • 1 Ayala-Rincón, Mauricio
  • 1 Baelde, David

  • Refine by Series/Journal
  • 14 LIPIcs
  • 2 OASIcs
  • 1 TGDK

  • Refine by Classification
  • 3 Theory of computation → Algebraic complexity theory
  • 3 Theory of computation → Computational geometry
  • 2 Computing methodologies → Artificial intelligence
  • 2 Theory of computation → Approximation algorithms analysis
  • 2 Theory of computation → Facility location and clustering

  • Refine by Keyword
  • 2 Clustering
  • 1 Agent
  • 1 Algebraic Complexity
  • 1 Algebraic Complexity theory
  • 1 Anti-unification
