11 Search Results for "Agrawal, Rohit"


Document
Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique

Authors: Nikhil Kumar, J. J. Nan, and Chaitanya Swamy

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
In the classical survivable-network-design problem (SNDP), we are given an undirected graph G = (V, E), non-negative edge costs, and some k tuples (s_i,t_i,r_i), where s_i,t_i ∈ V and r_i ∈ ℤ_+. The objective is to find a minimum-cost subset H ⊆ E such that each s_i-t_i pair remains connected even after the failure of any r_i-1 edges. It is well-known that SNDP can be equivalently modeled using a weakly-supermodular cut-requirement function f, where the objective is to find the minimum-cost subset of edges that picks at least f(S) edges across every cut S ⊆ V. Recently, motivated by fault-tolerance in graph spanners, Dinitz, Koranteng, and Kortsarz proposed a variant of SNDP that enforces a relative level of fault tolerance with respect to G. Even when a feasible SNDP solution does not exist because G lacks the required fault-tolerance, the goal is to find a solution H that is at least as fault-tolerant as G itself. They formalize the latter condition in terms of paths and fault-sets, which gives rise to path-relative SNDP (which they call relative SNDP). Along these lines, we introduce a new model of relative network design, called cut-relative SNDP (CR-SNDP), where the goal is to select a minimum-cost subset of edges that satisfies the given (weakly-supermodular) cut-requirement function to the maximum extent possible, i.e., by picking min{f(S), |δ_G(S)|} edges across every cut S ⊆ V. Unlike SNDP, the cut-relative and path-relative versions of SNDP are not equivalent. The resulting cut-requirement function for CR-SNDP (as also path-relative SNDP) is not weakly supermodular, and extreme-point solutions to the natural LP-relaxation need not correspond to a laminar family of tight cut constraints. Consequently, standard techniques cannot be used directly to design approximation algorithms for this problem. We develop a novel decomposition technique to circumvent this difficulty and use it to give a tight 2-approximation algorithm for CR-SNDP. We also show some new hardness results for these relative-SNDP problems.
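
To make the cut-relative requirement concrete, the following brute-force sketch (an illustration of the definition only, not the paper's algorithm; the function names and the toy instance are invented) checks whether an edge set H picks at least min{f(S), |δ_G(S)|} edges across every cut, where f is the SNDP cut-requirement function f(S) = max r_i over pairs (s_i,t_i) separated by S. It enumerates all cuts and is therefore exponential in |V|.

from itertools import combinations

def delta(edges, S):
    """Edges crossing the cut (S, V-S)."""
    return [(u, v) for (u, v) in edges if (u in S) != (v in S)]

def f_sndp(S, demands):
    """Classical SNDP cut requirement: max r_i over pairs (s_i, t_i) separated by S."""
    return max((r for (s, t, r) in demands if (s in S) != (t in S)), default=0)

def is_cr_sndp_feasible(V, E, H, demands):
    """Brute-force check of |delta_H(S)| >= min{f(S), |delta_G(S)|} for every cut S."""
    V = list(V)
    for size in range(1, len(V)):
        for S in map(set, combinations(V, size)):
            if len(delta(H, S)) < min(f_sndp(S, demands), len(delta(E, S))):
                return False
    return True

# Toy instance: a 4-cycle with a single demand of requirement 2.
V, E = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]
demands = [(0, 2, 2)]
print(is_cr_sndp_feasible(V, E, E, demands))                  # True: G itself is always feasible
print(is_cr_sndp_feasible(V, E, [(0, 1), (1, 2)], demands))   # False: the cut {0} picks too few edges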

Cite as

Nikhil Kumar, J. J. Nan, and Chaitanya Swamy. Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 38:1-38:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{kumar_et_al:LIPIcs.ESA.2025.38,
  author =	{Kumar, Nikhil and Nan, J. J. and Swamy, Chaitanya},
  title =	{{Tight Guarantees for Cut-Relative Survivable Network Design via a Decomposition Technique}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{38:1--38:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.38},
  URN =		{urn:nbn:de:0030-drops-245061},
  doi =		{10.4230/LIPIcs.ESA.2025.38},
  annote =	{Keywords: Approximation algorithms, Network Design, Cut-requirement functions, Weak Supermodularity, Iterative rounding, LP rounding algorithms}
}
Document
RANDOM
Efficient Polynomial Identity Testing over Nonassociative Algebras

Authors: Partha Mukhopadhyay, C. Ramya, and Pratik Shastri

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
We design the first efficient polynomial identity testing algorithms over the nonassociative polynomial algebra. In particular, multiplication among the formal variables is commutative but it is not associative. This complements the strong lower bound results obtained over this algebra by Hrubeš, Yehudayoff, and Wigderson [Pavel Hrubes et al., 2010] and Fijalkow, Lagarde, Ohlmann, and Serre [Fijalkow et al., 2021] from the identity testing perspective. Our main results are the following:
- We construct nonassociative algebras (both commutative and noncommutative) which have no low-degree identities. As a result, we obtain the first Amitsur-Levitzki type theorems [A. S. Amitsur and J. Levitzki, 1950] over nonassociative polynomial algebras. As a direct consequence, we obtain randomized polynomial-time black-box PIT algorithms for nonassociative polynomials which allow evaluation over such algebras.
- On the derandomization side, we give a deterministic polynomial-time identity testing algorithm for nonassociative polynomials given by arithmetic circuits in the white-box setting. Previously, such an algorithm was known with the additional restriction of noncommutativity [Vikraman Arvind et al., 2017].
- In the black-box setting, we construct a hitting set of quasipolynomial size for nonassociative polynomials computed by arithmetic circuits of small depth. Understanding the black-box complexity of identity testing, even in the randomized setting, was open prior to our work.
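
As an illustration of the nonassociative setting (a sketch under our own conventions, not code from the paper): a finite-dimensional algebra can be specified by a structure-constants tensor, coordinate-vector multiplication extends it bilinearly, and a black-box identity test in the spirit of the first result evaluates the parse tree of a nonassociative formula at elements of such an algebra. Since nothing forces associativity, different bracketings of the same product can disagree:

import numpy as np

# The product of basis elements is e_i * e_j = sum_k C[i, j, k] e_k,
# extended bilinearly to coordinate vectors; associativity is not assumed.
def multiply(C, a, b):
    return np.einsum('i,j,ijk->k', a, b, C)

# A nonassociative formula is a binary parse tree:
#   ('var', name), ('add', L, R) or ('mul', L, R).
def evaluate(node, assignment, C):
    if node[0] == 'var':
        return assignment[node[1]]
    left, right = evaluate(node[1], assignment, C), evaluate(node[2], assignment, C)
    return left + right if node[0] == 'add' else multiply(C, left, right)

# Toy 2-dimensional commutative but nonassociative algebra (constants chosen
# only for illustration): e0*e0 = e1, e0*e1 = e1*e0 = e0, e1*e1 = e0.
C = np.zeros((2, 2, 2))
C[0, 0, 1] = C[0, 1, 0] = C[1, 0, 0] = C[1, 1, 0] = 1.0

point = {'x': np.array([1.0, 0.0])}
t1 = ('mul', ('mul', ('mul', ('var', 'x'), ('var', 'x')), ('var', 'x')), ('var', 'x'))   # ((x*x)*x)*x
t2 = ('mul', ('mul', ('var', 'x'), ('var', 'x')), ('mul', ('var', 'x'), ('var', 'x')))   # (x*x)*(x*x)
print(evaluate(t1, point, C), evaluate(t2, point, C))   # [0. 1.] vs [1. 0.]: different bracketings differ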

Cite as

Partha Mukhopadhyay, C. Ramya, and Pratik Shastri. Efficient Polynomial Identity Testing over Nonassociative Algebras. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 56:1-56:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{mukhopadhyay_et_al:LIPIcs.APPROX/RANDOM.2025.56,
  author =	{Mukhopadhyay, Partha and C. Ramya and Shastri, Pratik},
  title =	{{Efficient Polynomial Identity Testing over Nonassociative Algebras}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{56:1--56:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.56},
  URN =		{urn:nbn:de:0030-drops-244224},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.56},
  annote =	{Keywords: Polynomial identity testing, nonassociative algebra, arithmetic circuits, black-box algorithms, white-box algorithms}
}
Document
Omega-Regular Verification and Control for Distributional Specifications in MDPs

Authors: S. Akshay, Ouldouz Neysari, and Ðorđe Žikelić

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
A classical approach to studying Markov decision processes (MDPs) is to view them as state transformers. However, MDPs can also be viewed as distribution transformers, where an MDP under a strategy generates a sequence of probability distributions over MDP states. This view arises in several applications, even as the probabilistic model checking problem becomes much harder compared to the classical state transformer counterpart. It is known that even distributional reachability and safety problems become computationally intractable (Skolem- and positivity-hard). To address this challenge, recent works focused on sound but possibly incomplete methods for verification and control of MDPs under the distributional view. However, existing automated methods are applicable only to distributional reachability, safety and reach-avoidance specifications. In this work, we present the first automated method for verification and control of MDPs with respect to distributional omega-regular specifications. To achieve this, we propose a novel notion of distributional certificates, which are sound and complete proof rules for proving that an MDP under a distributionally memoryless strategy satisfies some distributional omega-regular specification. We then use our distributional certificates to design the first fully automated algorithms for verification and control of MDPs with respect to distributional omega-regular specifications. Our algorithms follow a template-based synthesis approach and provide soundness and relative completeness guarantees, while running in PSPACE. Our prototype implementation demonstrates practical applicability of our algorithms to challenging examples collected from the literature.
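
For intuition on the distribution-transformer view (a minimal sketch with an invented 3-state MDP and a state-based memoryless strategy, a special case of the distributionally memoryless strategies considered in the paper): under a fixed strategy the MDP induces a single row-stochastic matrix M, the generated sequence of distributions is μ_{t+1} = μ_t M, and a distributional specification constrains this sequence rather than individual runs.

import numpy as np

# Transition matrices P[a][s, s'] of a 3-state MDP with two actions.
P = {'a': np.array([[0.5, 0.5, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]]),
     'b': np.array([[1.0, 0.0, 0.0],
                    [0.2, 0.0, 0.8],
                    [0.0, 0.0, 1.0]])}

# A (state-based) memoryless strategy: in each state, a distribution over actions.
strategy = {0: {'a': 1.0}, 1: {'a': 0.5, 'b': 0.5}, 2: {'a': 1.0}}
M = np.zeros((3, 3))
for s, choice in strategy.items():
    for a, p in choice.items():
        M[s] += p * P[a][s]

mu = np.array([1.0, 0.0, 0.0])   # initial distribution over states
for t in range(5):
    print(t, mu)                 # e.g. a distributional safety spec could demand mu[2] <= 0.4 at every step
    mu = mu @ M                  # the MDP acting as a distribution transformer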

Cite as

S. Akshay, Ouldouz Neysari, and Ðorđe Žikelić. Omega-Regular Verification and Control for Distributional Specifications in MDPs. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{akshay_et_al:LIPIcs.CONCUR.2025.6,
  author =	{Akshay, S. and Neysari, Ouldouz and \v{Z}ikeli\'{c}, {\DJ}or{\dj}e},
  title =	{{Omega-Regular Verification and Control for Distributional Specifications in MDPs}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.6},
  URN =		{urn:nbn:de:0030-drops-239562},
  doi =		{10.4230/LIPIcs.CONCUR.2025.6},
  annote =	{Keywords: MDPs, Distributional objectives, \omega-regularity, Certificates}
}
Document
Near-Optimal Averaging Samplers and Matrix Samplers

Authors: Zhiyang Xun and David Zuckerman

Published in: LIPIcs, Volume 339, 40th Computational Complexity Conference (CCC 2025)


Abstract
We present the first efficient averaging sampler that achieves asymptotically optimal randomness complexity and near-optimal sample complexity. For any δ < ε and any constant α > 0, our sampler uses m + O(log (1 / δ)) random bits to output t = O((1/ε² log 1/δ)^{1 + α}) samples Z_1, … , Z_t ∈ {0, 1}^m such that for any function f: {0, 1}^m → [0, 1], Pr[|1/t∑_{i=1}^t f(Z_i) - 𝔼[f]| ≤ ε] ≥ 1 - δ. The randomness complexity is optimal up to a constant factor, and the sample complexity is optimal up to the O((1/(ε²) log 1/(δ))^α) factor. Our technique generalizes to matrix samplers. A matrix sampler is defined similarly, except that f: {0, 1}^m → ℂ^{d×d} and the absolute value is replaced by the spectral norm. Our matrix sampler achieves randomness complexity m + Õ(log(d / δ)) and sample complexity O((1/ε² log d/δ)^{1 + α}) for any constant α > 0, both near-optimal with only a logarithmic factor in randomness complexity and an additional α exponent on the sample complexity. We use known connections with randomness extractors and list-decodable codes to give applications to these objects. Specifically, we give the first extractor construction with optimal seed length up to an arbitrarily small constant factor above 1, when the min-entropy k = β n for a large enough constant β < 1. Finally, we generalize the definition of averaging sampler to any normed vector space.
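
For comparison (a hedged sketch, not the construction from the paper): the naive averaging sampler that draws t independent uniform points already meets the (ε, δ) guarantee by Hoeffding's inequality, but it spends t·m random bits, which is exactly the cost the construction above reduces to m + O(log(1/δ)). The snippet below simply estimates the naive sampler's empirical failure probability for one [0,1]-valued test function.

import random

def naive_sampler(m, t):
    """t independent uniform samples from {0,1}^m, costing t*m random bits."""
    return [random.getrandbits(m) for _ in range(t)]

def f(z, m):
    """An arbitrary [0,1]-valued test function: normalized Hamming weight, E[f] = 1/2."""
    return bin(z).count('1') / m

m, eps, t, trials = 20, 0.05, 400, 2000
failures = 0
for _ in range(trials):
    est = sum(f(z, m) for z in naive_sampler(m, t)) / t
    failures += abs(est - 0.5) > eps
print("empirical failure probability:", failures / trials)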

Cite as

Zhiyang Xun and David Zuckerman. Near-Optimal Averaging Samplers and Matrix Samplers. In 40th Computational Complexity Conference (CCC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 339, pp. 6:1-6:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{xun_et_al:LIPIcs.CCC.2025.6,
  author =	{Xun, Zhiyang and Zuckerman, David},
  title =	{{Near-Optimal Averaging Samplers and Matrix Samplers}},
  booktitle =	{40th Computational Complexity Conference (CCC 2025)},
  pages =	{6:1--6:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-379-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{339},
  editor =	{Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2025.6},
  URN =		{urn:nbn:de:0030-drops-237001},
  doi =		{10.4230/LIPIcs.CCC.2025.6},
  annote =	{Keywords: Pseudorandomness, Averaging Samplers, Randomness Extractors}
}
Document
Uniform Bounds on Product Sylvester-Gallai Configurations

Authors: Abhibhav Garg, Rafael Oliveira, and Akash Kumar Sengupta

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
In this work, we explore a non-linear extension of the classical Sylvester-Gallai configuration. Let 𝕂 be an algebraically closed field of characteristic zero, and let ℱ = {F_1, …, F_m} ⊂ 𝕂[x_1, …, x_N] denote a collection of irreducible homogeneous polynomials of degree at most d, where each F_i is not a scalar multiple of any other F_j for i ≠ j. We define ℱ to be a product Sylvester-Gallai configuration if, for any two distinct polynomials F_i, F_j ∈ ℱ, the following condition is satisfied: ∏_{k≠i, j} F_k ∈ rad (F_i, F_j) . We prove that product Sylvester-Gallai configurations are inherently low dimensional. Specifically, we show that there exists a function λ : ℕ → ℕ, independent of 𝕂, N, and m, such that any product Sylvester-Gallai configuration must satisfy: dim(span_𝕂(ℱ)) ≤ λ(d). This result generalizes the main theorems from (Shpilka 2019, Peleg and Shpilka 2020, Oliveira and Sengupta 2023), and gets us one step closer to a full derandomization of the polynomial identity testing problem for the class of depth 4 circuits with bounded top and bottom fan-in.
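
For intuition, the degree-one case recovers a Sylvester-Gallai-type condition on linear forms (a standard observation, not a statement from the paper): if all F_k are linear and F_i, F_j are linearly independent, then (F_i, F_j) is a prime ideal equal to its own radical, so ∏_{k≠i,j} F_k ∈ rad(F_i, F_j) holds exactly when some single F_k with k ≠ i,j lies in span_𝕂{F_i, F_j}; this is the classical-style condition whose configurations are already known to span a low-dimensional space.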

Cite as

Abhibhav Garg, Rafael Oliveira, and Akash Kumar Sengupta. Uniform Bounds on Product Sylvester-Gallai Configurations. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 52:1-52:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.SoCG.2025.52,
  author =	{Garg, Abhibhav and Oliveira, Rafael and Sengupta, Akash Kumar},
  title =	{{Uniform Bounds on Product Sylvester-Gallai Configurations}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{52:1--52:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.52},
  URN =		{urn:nbn:de:0030-drops-232043},
  doi =		{10.4230/LIPIcs.SoCG.2025.52},
  annote =	{Keywords: Sylvester-Gallai theorem, arrangements of hypersurfaces, algebraic complexity, polynomial identity testing, algebraic geometry, commutative algebra}
}
Document
Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering

Authors: Ameet Gadekar and Tanmay Inamdar

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
Hybrid k-Clustering is a model of clustering that generalizes two of the most widely studied clustering objectives: k-Center and k-Median. In this model, given a set of n points P, the goal is to find k centers such that the sum of the r-distances of each point to its nearest center is minimized. The r-distance between two points p and q is defined as max{dist(p, q)-r, 0} - this represents the distance of p to the boundary of the r-radius ball around q if p is outside the ball, and 0 otherwise. This problem was recently introduced by Fomin et al. [APPROX 2024], who designed a (1+ε, 1+ε)-bicriteria approximation that runs in time 2^{(kd/ε)^{O(1)}} ⋅ n^{O(1)} for inputs in ℝ^d; such a bicriteria solution uses balls of radius (1+ε)r instead of r, and has a cost at most 1+ε times the cost of an optimal solution using balls of radius r. In this paper we significantly improve upon this result by designing an approximation algorithm with the same bicriteria guarantee, but with running time that is FPT only in k and ε - crucially, removing the exponential dependence on the dimension d. This resolves an open question posed in their paper. Our results extend further in several directions. First, our approximation scheme works in a broader class of metric spaces, including doubling spaces, minor-free, and bounded treewidth metrics. Second, our techniques yield similar bicriteria FPT-approximation schemes for other variants of Hybrid k-Clustering, e.g., when the objective features the sum of z-th power of the r-distances. Finally, we also design a coreset for Hybrid k-Clustering in doubling spaces, answering another open question from the work of Fomin et al.
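
To make the objective concrete (an illustrative sketch with invented toy data, not the algorithm from the paper): the r-distance and the Hybrid k-Clustering cost of a fixed center set are straightforward to evaluate, and r = 0 recovers the k-Median objective.

import math

def r_distance(p, q, r):
    """max{dist(p, q) - r, 0}: distance from p to the boundary of the radius-r
    ball around q, or 0 if p already lies inside that ball."""
    return max(math.dist(p, q) - r, 0.0)

def hybrid_cost(points, centers, r):
    """Hybrid k-Clustering objective: sum over points of the r-distance to the nearest center."""
    return sum(min(r_distance(p, c, r) for c in centers) for p in points)

points = [(0, 0), (1, 0), (4, 0), (9, 0)]
centers = [(0, 0), (8, 0)]
print(hybrid_cost(points, centers, r=0))   # 6.0  (the k-Median cost)
print(hybrid_cost(points, centers, r=1))   # 3.0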

Cite as

Ameet Gadekar and Tanmay Inamdar. Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 35:1-35:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{gadekar_et_al:LIPIcs.STACS.2025.35,
  author =	{Gadekar, Ameet and Inamdar, Tanmay},
  title =	{{Dimension-Free Parameterized Approximation Schemes for Hybrid Clustering}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{35:1--35:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.35},
  URN =		{urn:nbn:de:0030-drops-228615},
  doi =		{10.4230/LIPIcs.STACS.2025.35},
  annote =	{Keywords: Clustering, Parameterized algorithms, FPT approximation, k-Median, k-Center}
}
Document
Position
Knowledge Graphs for the Life Sciences: Recent Developments, Challenges and Opportunities

Authors: Jiaoyan Chen, Hang Dong, Janna Hastings, Ernesto Jiménez-Ruiz, Vanessa López, Pierre Monnin, Catia Pesquita, Petr Škoda, and Valentina Tamma

Published in: TGDK, Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 1, Issue 1


Abstract
The term life sciences refers to the disciplines that study living organisms and life processes, and include chemistry, biology, medicine, and a range of other related disciplines. Research efforts in life sciences are heavily data-driven, as they produce and consume vast amounts of scientific data, much of which is intrinsically relational and graph-structured. The volume of data and the complexity of scientific concepts and relations referred to therein promote the application of advanced knowledge-driven technologies for managing and interpreting data, with the ultimate aim to advance scientific discovery. In this survey and position paper, we discuss recent developments and advances in the use of graph-based technologies in life sciences and set out a vision for how these technologies will impact these fields into the future. We focus on three broad topics: the construction and management of Knowledge Graphs (KGs), the use of KGs and associated technologies in the discovery of new knowledge, and the use of KGs in artificial intelligence applications to support explanations (explainable AI). We select a few exemplary use cases for each topic, discuss the challenges and open research questions within these topics, and conclude with a perspective and outlook that summarizes the overarching challenges and their potential solutions as a guide for future research.

Cite as

Jiaoyan Chen, Hang Dong, Janna Hastings, Ernesto Jiménez-Ruiz, Vanessa López, Pierre Monnin, Catia Pesquita, Petr Škoda, and Valentina Tamma. Knowledge Graphs for the Life Sciences: Recent Developments, Challenges and Opportunities. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 5:1-5:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@Article{chen_et_al:TGDK.1.1.5,
  author =	{Chen, Jiaoyan and Dong, Hang and Hastings, Janna and Jim\'{e}nez-Ruiz, Ernesto and L\'{o}pez, Vanessa and Monnin, Pierre and Pesquita, Catia and \v{S}koda, Petr and Tamma, Valentina},
  title =	{{Knowledge Graphs for the Life Sciences: Recent Developments, Challenges and Opportunities}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:33},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.5},
  URN =		{urn:nbn:de:0030-drops-194791},
  doi =		{10.4230/TGDK.1.1.5},
  annote =	{Keywords: Knowledge graphs, Life science, Knowledge discovery, Explainable AI}
}
Document
Vision
Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges

Authors: Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou

Published in: TGDK, Volume 1, Issue 1 (2023): Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 1, Issue 1


Abstract
The graph model is nowadays largely adopted to model a wide range of knowledge and data, spanning from social networks to knowledge graphs (KGs), representing a successful paradigm of how symbolic and transparent AI can scale on the World Wide Web. However, due to their unprecedented volume, they are generally tackled by Machine Learning (ML) and mostly numeric-based methods such as knowledge graph embedding models (KGE) and deep neural networks (DNNs). The latter methods have lately proved very effective, driving the current AI spring. In this vision paper, we introduce some of the main existing methods for combining KGs and ML, divided into two categories: those using ML to improve KGs, and those using KGs to improve results on ML tasks. From this introduction, we highlight research gaps and perspectives that we deem promising and currently under-explored for the involved research communities, spanning KG support for LLM prompting, the integration of KG semantics into ML models and symbol-based methods, the interpretability of ML models, and the need for improved benchmark datasets. In our opinion, such perspectives are stepping stones in an ultimate view of KGs as central assets for neuro-symbolic and explainable AI.

Cite as

Claudia d'Amato, Louis Mahon, Pierre Monnin, and Giorgos Stamou. Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges. In Special Issue on Trends in Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 1, Issue 1, pp. 8:1-8:35, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


Copy BibTex To Clipboard

@Article{damato_et_al:TGDK.1.1.8,
  author =	{d'Amato, Claudia and Mahon, Louis and Monnin, Pierre and Stamou, Giorgos},
  title =	{{Machine Learning and Knowledge Graphs: Existing Gaps and Future Research Challenges}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{8:1--8:35},
  year =	{2023},
  volume =	{1},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.8},
  URN =		{urn:nbn:de:0030-drops-194824},
  doi =		{10.4230/TGDK.1.1.8},
  annote =	{Keywords: Graph-based Learning, Knowledge Graph Embeddings, Large Language Models, Explainable AI, Knowledge Graph Completion \& Curation}
}
Document
RANDOM
Samplers and Extractors for Unbounded Functions

Authors: Rohit Agrawal

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
Błasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman’s equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
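
For readers unfamiliar with the terminology, one standard way to state the tail condition referred to above (our phrasing, not the paper's): f(U_m) is σ-subgaussian if 𝔼[exp(λ(f(U_m) - 𝔼[f(U_m)]))] ≤ exp(λ²σ²/2) for all real λ, which implies the Gaussian-type tail Pr[|f(U_m) - 𝔼[f(U_m)]| ≥ t] ≤ 2exp(-t²/(2σ²)). Every [0,1]-bounded function is subgaussian (by Hoeffding's lemma), so samplers for subgaussian and subexponential functions strictly extend the usual [0,1]-bounded setting.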

Cite as

Rohit Agrawal. Samplers and Extractors for Unbounded Functions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 59:1-59:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{agrawal:LIPIcs.APPROX-RANDOM.2019.59,
  author =	{Agrawal, Rohit},
  title =	{{Samplers and Extractors for Unbounded Functions}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{59:1--59:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.59},
  URN =		{urn:nbn:de:0030-drops-112749},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.59},
  annote =	{Keywords: averaging samplers, subgaussian samplers, randomness extractors, Kullback-Leibler divergence}
}
Document
Quasipolynomial Hitting Sets for Circuits with Restricted Parse Trees

Authors: Ramprasad Saptharishi and Anamay Tengse

Published in: LIPIcs, Volume 122, 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)


Abstract
We study the class of non-commutative Unambiguous circuits or Unique-Parse-Tree (UPT) circuits, and a related model of Few-Parse-Trees (FewPT) circuits (which were recently introduced by Lagarde, Malod and Perifel [Guillaume Lagarde et al., 2016] and Lagarde, Limaye and Srinivasan [Guillaume Lagarde et al., 2017]) and give the following constructions:
- An explicit hitting set of quasipolynomial size for UPT circuits,
- An explicit hitting set of quasipolynomial size for FewPT circuits (circuits with constantly many parse tree shapes),
- An explicit hitting set of polynomial size for UPT circuits (of known parse tree shape), when a parameter of preimage-width is bounded by a constant.
The above three results are extensions of the results of [Manindra Agrawal et al., 2015], [Rohit Gurjar et al., 2015] and [Rohit Gurjar et al., 2016] to the setting of UPT circuits, and hence also generalize their results in the commutative world from read-once oblivious algebraic branching programs (ROABPs) to UPT-set-multilinear circuits. The main idea is to study shufflings of non-commutative polynomials, which can then be used to prove suitable depth reduction results for UPT circuits and thereby allow a careful translation of the ideas in [Manindra Agrawal et al., 2015], [Rohit Gurjar et al., 2015] and [Rohit Gurjar et al., 2016].
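
As an illustrative aside (our gloss, not a definition taken verbatim from the paper): a parse tree of a non-commutative circuit records the bracketing by which a monomial is multiplied out, e.g. ((x_1⋅x_2)⋅x_3)⋅x_4 (a left comb) versus (x_1⋅x_2)⋅(x_3⋅x_4) (a balanced tree), and a UPT circuit is one in which all parse trees share a single shape. The left-comb shape corresponds to the left-to-right evaluation order of an algebraic branching program, which is how the ROABP results cited above fit into this setting.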

Cite as

Ramprasad Saptharishi and Anamay Tengse. Quasipolynomial Hitting Sets for Circuits with Restricted Parse Trees. In 38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 122, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Copy BibTex To Clipboard

@InProceedings{saptharishi_et_al:LIPIcs.FSTTCS.2018.6,
  author =	{Saptharishi, Ramprasad and Tengse, Anamay},
  title =	{{Quasipolynomial Hitting Sets for Circuits with Restricted Parse Trees}},
  booktitle =	{38th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2018)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-093-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{122},
  editor =	{Ganguly, Sumit and Pandya, Paritosh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2018.6},
  URN =		{urn:nbn:de:0030-drops-99050},
  doi =		{10.4230/LIPIcs.FSTTCS.2018.6},
  annote =	{Keywords: Unambiguous Circuits, Read-once Oblivious ABPs, Polynomial Identity Testing, Lower Bounds, Algebraic Circuit Complexity}
}
Document
Optimal Scheduling of Periodic Gang Tasks

Authors: Joël Goossens and Pascal Richard

Published in: LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1


Abstract
The gang scheduling of parallel implicit-deadline periodic task systems upon identical multiprocessor platforms is considered. In this scheduling problem, parallel tasks use several processors simultaneously. We propose two DPFAIR (deadline partitioning) algorithms that schedule all jobs in every interval of time delimited by two subsequent deadlines. These algorithms define a static schedule pattern that is stretched at run-time in every interval of the DPFAIR schedule. The first algorithm is based on linear programming and is the first to be proved optimal for the considered gang scheduling problem. Furthermore, it runs in polynomial time for a fixed number m of processors, and an efficient implementation is fully detailed. The second algorithm is an approximation algorithm based on a fixed-priority rule; under resource-augmentation analysis, it computes an optimal schedule pattern with a speedup factor bounded by (2-1/m). Both algorithms are also evaluated through intensive numerical experiments.
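
As a rough illustration of the deadline-partitioning skeleton (a hedged sketch with invented task data; it computes only the DPFAIR intervals and per-interval execution demands, not the schedule patterns that are the subject of the two algorithms): time is sliced at every job deadline within a hyperperiod, and in each slice a gang task with utilization u_i = C_i/T_i and parallelism v_i must receive u_i times the slice length, simultaneously on v_i processors.

from math import gcd
from functools import reduce

# Each implicit-deadline periodic gang task: (name, parallelism v, wcet C, period T).
tasks = [('t1', 2, 2, 4), ('t2', 1, 3, 6), ('t3', 3, 4, 12)]

def lcm(a, b):
    return a * b // gcd(a, b)

hyperperiod = reduce(lcm, (T for *_, T in tasks))
# Deadline partitioning: interval boundaries are all job deadlines in [0, hyperperiod].
boundaries = sorted({k * T for *_, T in tasks for k in range(1, hyperperiod // T + 1)} | {0})

for a, b in zip(boundaries, boundaries[1:]):
    length = b - a
    # Execution demand of each task in this interval: u_i * length, on v_i processors at once.
    demand = [(name, v, (C / T) * length) for (name, v, C, T) in tasks]
    avg_procs = sum(v * share for (_, v, share) in demand) / length
    print(f'[{a},{b}]', demand, f'average processors needed = {avg_procs:.2f}')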

Cite as

Joël Goossens and Pascal Richard. Optimal Scheduling of Periodic Gang Tasks. In LITES, Volume 3, Issue 1 (2016). Leibniz Transactions on Embedded Systems, Volume 3, Issue 1, pp. 04:1-04:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@Article{goossens_et_al:LITES-v003-i001-a004,
  author =	{Goossens, Jo\"{e}l and Richard, Pascal},
  title =	{{Optimal Scheduling of Periodic Gang Tasks}},
  journal =	{Leibniz Transactions on Embedded Systems},
  pages =	{04:1--04:18},
  ISSN =	{2199-2002},
  year =	{2016},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LITES-v003-i001-a004},
  URN =		{urn:nbn:de:0030-drops-192593},
  doi =		{10.4230/LITES-v003-i001-a004},
  annote =	{Keywords: Real-time systems, Scheduling, Parallel tasks}
}
  • Refine by Type
  • 11 Document/PDF
  • 8 Document/HTML

  • Refine by Publication Year
  • 6 2025
  • 2 2023
  • 1 2019
  • 1 2018
  • 1 2016

  • Refine by Author
  • 2 Monnin, Pierre
  • 1 Agrawal, Rohit
  • 1 Akshay, S.
  • 1 C. Ramya
  • 1 Chen, Jiaoyan
  • Show More...

  • Refine by Series/Journal
  • 8 LIPIcs
  • 1 LITES
  • 2 TGDK

  • Refine by Classification
  • 3 Theory of computation → Algebraic complexity theory
  • 2 Theory of computation → Pseudorandomness and derandomization
  • 1 Applied computing → Life and medical sciences
  • 1 Computer systems organization → Embedded and cyber-physical systems
  • 1 Computer systems organization → Real-time systems
  • Show More...

  • Refine by Keyword
  • 2 Explainable AI
  • 1 Algebraic Circuit Complexity
  • 1 Approximation algorithms
  • 1 Averaging Samplers
  • 1 Certificates
  • Show More...
