Consider a k-SAT formula Φ where every variable appears at most d times, and let σ be a satisfying assignment of Φ sampled proportionally to e^{β m(σ)} where m(σ) is the number of variables set to true and β is a real parameter. Given Φ and σ, can we learn the value of β efficiently?
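For intuition, here is a minimal brute-force sketch of this model, not taken from the paper: it enumerates all assignments of a toy formula, keeps the satisfying ones, and samples one with probability proportional to e^{β m(σ)}. The clause encoding and the helper names (`satisfies`, `sample_assignment`) are illustrative assumptions, and the enumeration is exponential in the number of variables.

```python
import math
import random
from itertools import product

def satisfies(clauses, sigma):
    # A clause is a list of literals: literal v > 0 means variable v is true,
    # v < 0 means variable |v| is false. sigma maps variable -> bool.
    return all(any(sigma[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def sample_assignment(clauses, n_vars, beta):
    """Sample a satisfying assignment sigma with probability proportional
    to e^{beta * m(sigma)}, where m(sigma) is the number of variables set
    to true. Brute force, for intuition only."""
    sats, weights = [], []
    for bits in product([False, True], repeat=n_vars):
        sigma = {v + 1: bits[v] for v in range(n_vars)}
        if satisfies(clauses, sigma):
            sats.append(sigma)
            weights.append(math.exp(beta * sum(bits)))
    return random.choices(sats, weights=weights, k=1)[0]

# Toy 3-SAT formula: (x1 v x2 v x3) AND (-x1 v x2 v -x3)
clauses = [[1, 2, 3], [-1, 2, -3]]
print(sample_assignment(clauses, n_vars=3, beta=0.5))
```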
This problem falls into a recent line of work on single-sample ("one-shot") learning of Markov random fields. The k-SAT setting we consider here was recently studied by Galanis, Kandiros, and Kalavasis (SODA'24), who showed that single-sample learning is possible when roughly d ≤ 2^{k/6.45} and impossible when d ≥ (k+1) 2^{k-1}. Crucially, their impossibility results relied on the existence of unsatisfiable instances, which, aside from the gap in d, left open the question of whether the feasibility threshold for one-shot learning is dictated by the satisfiability threshold of bounded-degree k-SAT formulas.
Our main contribution is to answer this question negatively. We show that one-shot learning for k-SAT is infeasible well below the satisfiability threshold; in fact, we obtain impossibility results for degrees d as low as k² when β is sufficiently large, and bootstrap this to small values of β when d scales exponentially with k, via a probabilistic construction. On the positive side, we simplify the analysis of the learning algorithm and obtain significantly stronger bounds on d in terms of β. In particular, for the uniform case (β → 0) that has been studied extensively in the sampling literature, our analysis shows that learning is possible under the condition d ≲ 2^{k/2}. This is nearly optimal (up to constant factors) in the sense that it is known that sampling a uniformly distributed satisfying assignment is NP-hard for d ≳ 2^{k/2}.
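To make the learning task concrete, the following sketch implements a maximum pseudo-likelihood estimator, a standard tool in the single-sample learning literature; the abstract does not spell out the paper's algorithm, so this should be read as an assumption-laden illustration rather than the authors' method. It rests on the observation that a variable whose flip preserves satisfiability (i.e., is "free" given the others) is true with conditional probability e^β/(e^β + 1), so the log-odds of the observed fraction of free variables set to true estimates β.

```python
import math

def satisfies(clauses, sigma):
    # Same clause convention as in the sketch above.
    return all(any(sigma[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def estimate_beta(clauses, sigma):
    """Maximum pseudo-likelihood estimate of beta from a single
    satisfying assignment sigma (a dict: variable -> bool).

    A variable v is 'free' if flipping it keeps the formula satisfied;
    for free variables P(sigma_v = True | rest) = e^b / (e^b + 1), while
    pinned variables carry no information about beta. The estimate is
    the empirical log-odds over the free variables."""
    true_cnt = false_cnt = 0
    for v in sigma:
        flipped = dict(sigma)
        flipped[v] = not sigma[v]
        if satisfies(clauses, flipped):  # v is free given the others
            if sigma[v]:
                true_cnt += 1
            else:
                false_cnt += 1
    if true_cnt == 0 or false_cnt == 0:
        raise ValueError("estimate undefined: no free variables on one side")
    return math.log(true_cnt / false_cnt)
```

Whether such an estimator concentrates around β given only one sample is precisely where the interplay between d and β enters; the sketch does not capture that analysis.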
@InProceedings{galanis_et_al:LIPIcs.ICALP.2025.84,
author = {Galanis, Andreas and Goldberg, Leslie Ann and Zhang, Xusheng},
title = {{One-Shot Learning for k-SAT}},
booktitle = {52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
pages = {84:1--84:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-372-0},
ISSN = {1868-8969},
year = {2025},
volume = {334},
editor = {Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.84},
URN = {urn:nbn:de:0030-drops-234610},
doi = {10.4230/LIPIcs.ICALP.2025.84},
annote = {Keywords: Computational Learning Theory, k-SAT, Maximum likelihood estimation}
}