5 Search Results for "Shah, Devavrat"


Document
Computing the Exact Radius of Large Graphs

Authors: Stefan Funke, Claudius Proissl, and Sabine Storandt

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
The radius of a graph is an important structural parameter which plays a key role in social network analysis and related applications. It measures the minimum shortest path distance required to reach all nodes in the graph from a single node. A node from which all other nodes are within a distance equal to the radius is called a center of the graph. In a graph with n nodes and m edges, the center and the radius can be determined in Õ(nm) time by computing shortest path distances between all pairs of nodes. Fine-grained complexity results suggest that asymptotically faster algorithms are unlikely to exist. In this paper, we describe a novel randomized algorithm for exact radius computation in weighted digraphs with an expected running time in Õ(d³m), where d is the so-called combinatorial dimension. Our methodology is inspired by Clarkson’s algorithm for LP-type problems. The value of d denotes the size of a basis, that is, a smallest subset of nodes that enforces the same radius as the whole node set. While we show that there exist graphs with d ∈ Θ(n), our empirical analysis reveals that even large real-world graphs have small combinatorial dimension. This allows us to compute the radius in near-linear time on such instances. The significantly improved scalability can be clearly observed in our experimental evaluation on a diverse set of benchmark graphs.
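For reference, a minimal sketch of the classical Õ(nm) baseline the abstract contrasts against (not the paper's Clarkson-style algorithm): the radius is the minimum eccentricity over all nodes, obtained with one Dijkstra run per node. Here `graph` is an assumed adjacency-list dict mapping each node to a list of (neighbor, weight) pairs.

import heapq

def eccentricity(graph, source):
    # Largest shortest-path distance from `source`, via Dijkstra.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    # Eccentricity is infinite if some node is unreachable from `source`.
    return max(dist.values()) if len(dist) == len(graph) else float('inf')

def radius_and_center(graph):
    # Naive baseline: one shortest-path tree per node, keep the minimum.
    ecc = {s: eccentricity(graph, s) for s in graph}
    center = min(ecc, key=ecc.get)
    return ecc[center], center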

Cite as

Stefan Funke, Claudius Proissl, and Sabine Storandt. Computing the Exact Radius of Large Graphs. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 17:1-17:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{funke_et_al:LIPIcs.SEA.2025.17,
  author =	{Funke, Stefan and Proissl, Claudius and Storandt, Sabine},
  title =	{{Computing the Exact Radius of Large Graphs}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{17:1--17:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.17},
  URN =		{urn:nbn:de:0030-drops-232555},
  doi =		{10.4230/LIPIcs.SEA.2025.17},
  annote =	{Keywords: Radius, Graph Center, LP-type, Combinatorial Dimension}
}
Document
Track A: Algorithms, Complexity and Games
One-Shot Learning for k-SAT

Authors: Andreas Galanis, Leslie Ann Goldberg, and Xusheng Zhang

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Consider a k-SAT formula Φ where every variable appears at most d times, and let σ be a satisfying assignment of Φ sampled proportionally to e^{β m(σ)}, where m(σ) is the number of variables set to true and β is a real parameter. Given Φ and σ, can we learn the value of β efficiently? This problem falls into a recent line of works about single-sample ("one-shot") learning of Markov random fields. The k-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24), where they showed that single-sample learning is possible when roughly d ≤ 2^{k/6.45} and impossible when d ≥ (k+1) 2^{k-1}. Crucially, for their impossibility results they used the existence of unsatisfiable instances, which, aside from the gap in d, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of k-SAT formulas of bounded degree. Our main contribution is to answer this question negatively. We show that one-shot learning for k-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees d as low as k² when β is sufficiently large, and bootstrap this to small values of β when d scales exponentially with k, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on d in terms of β. In particular, for the uniform case β → 0 that has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition d ≲ 2^{k/2}. This is nearly optimal (up to constant factors) in the sense that it is known that sampling a uniformly-distributed satisfying assignment is NP-hard for d ≳ 2^{k/2}.
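To make the learning task concrete, here is a standard pseudo-likelihood sketch for estimating β from a single sample (illustrative; not necessarily the paper's estimator). A variable is "free" if flipping it keeps Φ satisfied; conditioned on the rest, each free variable is true with probability e^β / (1 + e^β), so the logit of the empirical frequency among free variables estimates β.

from math import log

def satisfies(clauses, sigma):
    # clauses: DIMACS-style list of clauses, each a list of signed ints
    # (literal v means variable v is true, -v means false).
    # sigma: dict mapping variable -> bool.
    return all(any(sigma[abs(l)] == (l > 0) for l in c) for c in clauses)

def estimate_beta(clauses, sigma):
    free = true_free = 0
    for v in sigma:
        flipped = dict(sigma)
        flipped[v] = not sigma[v]
        if satisfies(clauses, flipped):  # both values of v satisfy Phi
            free += 1
            true_free += sigma[v]
    if free == 0 or true_free in (0, free):
        raise ValueError("pseudo-likelihood estimate undefined for this sample")
    p = true_free / free  # empirical frequency of 'true' among free variables
    return log(p / (1 - p))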

Cite as

Andreas Galanis, Leslie Ann Goldberg, and Xusheng Zhang. One-Shot Learning for k-SAT. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 84:1-84:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{galanis_et_al:LIPIcs.ICALP.2025.84,
  author =	{Galanis, Andreas and Goldberg, Leslie Ann and Zhang, Xusheng},
  title =	{{One-Shot Learning for k-SAT}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{84:1--84:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.84},
  URN =		{urn:nbn:de:0030-drops-234610},
  doi =		{10.4230/LIPIcs.ICALP.2025.84},
  annote =	{Keywords: Computational Learning Theory, k-SAT, Maximum likelihood estimation}
}
Document
When Does a Predictor Know Its Own Loss?

Authors: Aravind Gollakota, Parikshit Gopalan, Aayush Karan, Charlotte Peale, and Udi Wieder

Published in: LIPIcs, Volume 329, 6th Symposium on Foundations of Responsible Computing (FORC 2025)


Abstract
Given a predictor and a loss function, how well can we predict the loss that the predictor will incur on an input? This is the problem of loss prediction, a key computational task associated with uncertainty estimation for a predictor. In a classification setting, a predictor will typically predict a distribution over labels and hence have its own estimate of the loss that it will incur, given by the entropy of the predicted distribution. Should we trust this estimate? In other words, when does the predictor know what it knows and what it does not know? In this work we study the theoretical foundations of loss prediction. Our main contribution is to establish tight connections between nontrivial loss prediction and certain forms of multicalibration [Ursula Hébert-Johnson et al., 2018], a multigroup fairness notion that asks for calibrated predictions across computationally identifiable subgroups. Formally, we show that a loss predictor that is able to improve on the self-estimate of a predictor yields a witness to a failure of multicalibration, and vice versa. This has the implication that nontrivial loss prediction is in effect no easier or harder than auditing for multicalibration. We support our theoretical results with experiments that show a robust positive correlation between the multicalibration error of a predictor and the efficacy of training a loss predictor.
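The "self-estimate" the abstract refers to is concrete for cross-entropy loss: a predictor outputting a label distribution p expects to incur loss H(p), the entropy of its own prediction. A minimal sketch of the two quantities being compared (illustrative only, not the paper's construction):

import numpy as np

def self_estimated_loss(probs):
    # Entropy of each predicted distribution: the loss the predictor
    # expects under cross-entropy if labels were drawn from its own prediction.
    p = np.clip(probs, 1e-12, 1.0)  # probs: (n_samples, n_classes)
    return -(p * np.log(p)).sum(axis=1)

def realized_loss(probs, labels):
    # Cross-entropy actually incurred on the observed labels.
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    return -np.log(p)

A loss predictor "knows more" than the predictor itself exactly when it beats self_estimated_loss at predicting realized_loss on average, which is the kind of improvement the paper ties to a multicalibration violation.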

Cite as

Aravind Gollakota, Parikshit Gopalan, Aayush Karan, Charlotte Peale, and Udi Wieder. When Does a Predictor Know Its Own Loss? In 6th Symposium on Foundations of Responsible Computing (FORC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 329, pp. 22:1-22:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gollakota_et_al:LIPIcs.FORC.2025.22,
  author =	{Gollakota, Aravind and Gopalan, Parikshit and Karan, Aayush and Peale, Charlotte and Wieder, Udi},
  title =	{{When Does a Predictor Know Its Own Loss?}},
  booktitle =	{6th Symposium on Foundations of Responsible Computing (FORC 2025)},
  pages =	{22:1--22:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-367-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{329},
  editor =	{Bun, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2025.22},
  URN =		{urn:nbn:de:0030-drops-231490},
  doi =		{10.4230/LIPIcs.FORC.2025.22},
  annote =	{Keywords: loss prediction, multicalibration, active learning, algorithmic fairness, calibration, predictive uncertainty, uncertainty estimation, machine learning theory}
}
Document
Matrix Estimation, Latent Variable Model and Collaborative Filtering

Authors: Devavrat Shah

Published in: LIPIcs, Volume 93, 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)


Abstract
Estimating a matrix based on partial, noisy observations is prevalent in a variety of modern applications, with recommendation systems being a prototypical example. The non-parametric latent variable model provides a canonical representation for such matrix data when the underlying distribution satisfies "exchangeability", with graphons and the stochastic block model being recent examples of interest. Collaborative filtering has been a successfully utilized heuristic in practice since the dawn of e-commerce. In this extended abstract, we will argue that collaborative filtering (and its variants) solves matrix estimation for a generic latent variable model with near-optimal sample complexity.
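For orientation, a minimal sketch of the classic user-user collaborative filtering heuristic the abstract analyzes (a generic textbook variant, with all names hypothetical): an unobserved entry is estimated by averaging the ratings of the most similar users, where similarity is measured on commonly rated items.

import numpy as np

def user_user_estimate(M, mask, i, j, k=10):
    # Estimate the unobserved entry M[i, j] from the k users most similar
    # to user i who have rated item j. M: ratings matrix; mask: boolean
    # matrix, True where an entry is observed.
    sims = []
    for u in range(M.shape[0]):
        if u == i or not mask[u, j]:
            continue
        common = mask[i] & mask[u]  # items rated by both users
        if common.sum() < 2:
            continue
        # Negative mean squared difference on co-rated items; larger = more similar.
        sims.append((-np.mean((M[i, common] - M[u, common]) ** 2), u))
    if not sims:
        return np.nan
    top = sorted(sims, reverse=True)[:k]
    return float(np.mean([M[u, j] for _, u in top]))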

Cite as

Devavrat Shah. Matrix Estimation, Latent Variable Model and Collaborative Filtering. In 37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 93, pp. 4:1-4:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{shah:LIPIcs.FSTTCS.2017.4,
  author =	{Shah, Devavrat},
  title =	{{Matrix Estimation, Latent Variable Model and Collaborative Filtering}},
  booktitle =	{37th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2017)},
  pages =	{4:1--4:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-055-2},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{93},
  editor =	{Lokam, Satya and Ramanujam, R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2017.4},
  URN =		{urn:nbn:de:0030-drops-84193},
  doi =		{10.4230/LIPIcs.FSTTCS.2017.4},
  annote =	{Keywords: Matrix Estimation, Graphon Estimation, Collaborative Filtering}
}
Document
Invited Talk
Compute Choice (Invited Talk)

Authors: Devavrat Shah

Published in: LIPIcs, Volume 55, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)


Abstract
In this talk, we shall discuss the question of learning a distribution over permutations of n choices based on partial observations. This is central to capturing so-called "choice" in a variety of contexts: understanding the preferences of consumers over a collection of products based on purchasing and browsing data in retail and e-commerce, learning public opinion on a collection of socio-economic issues based on sparse polling data, and deciding a ranking of teams or players based on the outcomes of games. The talk will primarily discuss the relationship between the ability to learn, the nature of the partial information, and the number of available observations. Connections to the classical theory of social choice and behavioral psychology, as well as to the modern literature in statistics, learning theory, and operations research, will be discussed.
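One concrete instance of this program is ranking players from game outcomes. As a hedged illustration (one standard approach to that problem, not necessarily the talk's material), a random-walk method in the spirit of Rank Centrality scores items by the stationary distribution of a comparison-driven Markov chain; `wins` is an assumed matrix of pairwise outcome counts.

import numpy as np

def rank_from_pairwise(wins, n_iter=2000):
    # wins[i, j] = number of comparisons in which item j beat item i.
    # The walk moves from i to j in proportion to j's empirical win rate
    # over i; the stationary distribution scores the items.
    n = wins.shape[0]
    P = np.zeros((n, n))
    deg = [(wins[i] + wins[:, i] > 0).sum() for i in range(n)]
    d_max = max(deg) if max(deg) > 0 else 1  # normalizer keeping rows substochastic
    for i in range(n):
        for j in range(n):
            total = wins[i, j] + wins[j, i]
            if i != j and total > 0:
                P[i, j] = wins[i, j] / total / d_max
        P[i, i] = 1.0 - P[i].sum()  # lazy self-loop absorbs remaining mass
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):  # power iteration toward the stationary distribution
        pi = pi @ P
    return pi  # higher stationary mass = more preferred item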

Cite as

Devavrat Shah. Compute Choice (Invited Talk). In 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 55, p. 1:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{shah:LIPIcs.ICALP.2016.1,
  author =	{Shah, Devavrat},
  title =	{{Compute Choice}},
  booktitle =	{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},
  pages =	{1:1--1:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-013-2},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{55},
  editor =	{Chatzigiannakis, Ioannis and Mitzenmacher, Michael and Rabani, Yuval and Sangiorgi, Davide},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2016.1},
  URN =		{urn:nbn:de:0030-drops-63374},
  doi =		{10.4230/LIPIcs.ICALP.2016.1},
  annote =	{Keywords: Decision Systems, Learning Distributions, Partial observations}
}
  • Refine by Type
  • 5 Document/PDF
  • 3 Document/HTML

  • Refine by Publication Year
  • 3 2025
  • 1 2018
  • 1 2016

  • Refine by Author
  • 2 Shah, Devavrat
  • 1 Funke, Stefan
  • 1 Galanis, Andreas
  • 1 Goldberg, Leslie Ann
  • 1 Gollakota, Aravind

  • Refine by Series/Journal
  • 5 LIPIcs

  • Refine by Classification
  • 2 Theory of computation → Machine learning theory
  • 1 Mathematics of computing → Maximum likelihood estimation
  • 1 Theory of computation → Graph algorithms analysis
  • 1 Theory of computation → Parameterized complexity and exact algorithms
  • 1 Theory of computation → Problems, reductions and completeness

  • Refine by Keyword
  • 1 Collaborative Filtering
  • 1 Combinatorial Dimension
  • 1 Computational Learning Theory
  • 1 Decision Systems
  • 1 Graph Center
