LIPIcs.APPROX-RANDOM.2021.29.pdf
In this work, we show the first worst-case to average-case reduction for the classical k-SUM problem. A k-SUM instance is a collection of m integers, and the goal of the k-SUM problem is to find a subset of k of them that sums to 0. In the average-case version, the m elements are chosen uniformly at random from some interval [-u,u]. We consider the total setting, where m is sufficiently large (with respect to u and k) that solutions are guaranteed to exist with high probability. In particular, m = u^{Ω(1/k)} suffices for totality. Much of the appeal of k-SUM, in particular its connections to problems in computational geometry, extends to the total setting.

The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a running time of u^{Θ(1/log k)} when m = u^{Θ(1/log k)}. This beats the known (conditional) lower bounds for worst-case k-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound via a reduction from worst-case lattice problems, thereby introducing a new family of techniques into the field of fine-grained complexity. In particular, any algorithm solving average-case k-SUM on m elements in time u^{o(1/log k)} would yield a super-polynomial improvement in the complexity of algorithms for lattice problems.
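To make the definitions above concrete, here is a minimal toy sketch in Python. It is not the paper's reduction and not Wagner's algorithm: it merely samples an average-case instance of m integers uniform in [-u,u] and searches exhaustively for a k-subset summing to 0. The function names and the parameter choice m ≈ 4·u^{1/k} (a crude stand-in for the m = u^{Ω(1/k)} totality regime) are illustrative assumptions, not values taken from the paper.

```python
import random
from itertools import combinations

def random_ksum_instance(m, u, seed=None):
    """Average-case instance: m integers drawn uniformly from [-u, u]."""
    rng = random.Random(seed)
    return [rng.randint(-u, u) for _ in range(m)]

def brute_force_ksum(elements, k):
    """Return a k-subset summing to 0, or None if none is found.
    Exhaustive O(m^k) search -- only feasible for toy parameters,
    nowhere near the u^{Theta(1/log k)} running time discussed above."""
    for subset in combinations(elements, k):
        if sum(subset) == 0:
            return subset
    return None

if __name__ == "__main__":
    k, u = 3, 10**4
    # Crude stand-in for the totality regime m = u^{Omega(1/k)}:
    # with this many samples, a zero-sum k-subset exists with good probability.
    m = 4 * round(u ** (1 / k))
    instance = random_ksum_instance(m, u, seed=1)
    print(brute_force_ksum(instance, k))
```

With these toy parameters (k = 3, u = 10^4, m ≈ 86), a handful of zero-sum triples are expected to exist, so the search typically succeeds; the point is only to illustrate the average-case total setting, not to suggest an efficient algorithm.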