Learning-Augmented Streaming Algorithms for Approximating MAX-CUT

Authors: Yinhao Dong, Pan Peng, Ali Vakilian




File

LIPIcs.ITCS.2025.44.pdf
  • Filesize: 0.82 MB
  • 24 pages

Author Details

Yinhao Dong
  • School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
Pan Peng
  • School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
Ali Vakilian
  • Toyota Technological Institute at Chicago (TTIC), IL, USA

Cite As

Yinhao Dong, Pan Peng, and Ali Vakilian. Learning-Augmented Streaming Algorithms for Approximating MAX-CUT. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 44:1-44:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025) https://doi.org/10.4230/LIPIcs.ITCS.2025.44

Abstract

We study learning-augmented streaming algorithms for estimating the value of MAX-CUT in a graph. In the classical streaming model, while a 1/2-approximation for estimating the value of MAX-CUT can be trivially achieved with O(1) words of space, Kapralov and Krachun [STOC’19] showed that this is essentially the best possible: for any ε > 0, any (randomized) single-pass streaming algorithm that achieves an approximation ratio of at least 1/2 + ε requires Ω(n / 2^poly(1/ε)) space.
We show that it is possible to surpass the 1/2-approximation barrier using just O(1) words of space by leveraging a (machine-learned) oracle. Specifically, we consider streaming algorithms that are equipped with an ε-accurate oracle that, for each vertex in the graph, returns its correct label in {-1, +1} (its side in an optimal MAX-CUT solution) with probability 1/2 + ε, and the incorrect label otherwise.
Within this framework, we present a single-pass algorithm that approximates the value of MAX-CUT to within a factor of 1/2 + Ω(ε²) with probability at least 2/3 for insertion-only streams, using only poly(1/ε) words of space. We also extend our algorithm to fully dynamic streams while maintaining a space complexity of poly(1/ε,log n) words.
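The paper's algorithm is more involved than what fits here, but one natural primitive it suggests, namely estimating the value of the cut induced by the oracle's predicted labels over an insertion-only edge stream in small space, can be sketched with reservoir sampling (Vitter [47]). The sketch below is an illustrative toy under our own assumptions, not the authors' algorithm; the names `estimate_cut_fraction` and `oracle` are ours, and the O(1/ε²) sample-size remark is the standard sampling bound, not a claim from the paper.

```python
import random

def estimate_cut_fraction(edge_stream, oracle, k=1000, seed=0):
    """Estimate the fraction of stream edges cut by the oracle's
    predicted vertex partition, using reservoir sampling.

    edge_stream: iterable of (u, v) pairs (insertion-only stream).
    oracle: maps a vertex to a predicted label in {-1, +1}.
    k: reservoir size; O(1/eps^2) samples suffice for +-eps accuracy.
    Returns (m, f): the stream length m and the estimated cut fraction f.
    """
    rng = random.Random(seed)
    reservoir = []
    m = 0  # number of edges seen so far
    for u, v in edge_stream:
        m += 1
        if len(reservoir) < k:
            reservoir.append((u, v))
        else:
            # Algorithm R: keep each edge with probability k/m.
            j = rng.randrange(m)
            if j < k:
                reservoir[j] = (u, v)
    if m == 0:
        return 0, 0.0
    # An edge is cut when its endpoints receive different predicted labels.
    cut = sum(1 for u, v in reservoir if oracle(u) != oracle(v))
    return m, cut / len(reservoir)
```

For instance, on a path stream with a perfect parity oracle, every edge is cut and the estimate is exactly 1; with a noisy ε-accurate oracle the estimate degrades gracefully, which is the regime the paper analyzes.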

Subject Classification

ACM Subject Classification
  • Theory of computation → Streaming, sublinear and near linear time algorithms
Keywords
  • Learning-Augmented Algorithms
  • Graph Streaming Algorithms
  • MAX-CUT


References

  1. Anders Aamand, Justin Chen, Huy Nguyen, Sandeep Silwal, and Ali Vakilian. Improved frequency estimation algorithms with and without predictions. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS), 2023.
  2. Anders Aamand, Piotr Indyk, and Ali Vakilian. Frequency estimation algorithms under Zipfian distribution. arXiv preprint, 2019. URL: https://arxiv.org/abs/1908.05198.
  3. Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Graph sketches: sparsification, spanners, and subgraphs. In Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems (PODS), pages 5-14, 2012. URL: https://doi.org/10.1145/2213556.2213560.
  4. Spyros Angelopoulos, Christoph Dürr, Shendan Jin, Shahin Kamali, and Marc Renault. Online computation with untrusted advice. In 11th Innovations in Theoretical Computer Science Conference (ITCS), volume 151 of LIPIcs, pages 52:1-52:15, 2020. URL: https://doi.org/10.4230/LIPICS.ITCS.2020.52.
  5. Antonios Antoniadis, Christian Coester, Marek Eliás, Adam Polak, and Bertrand Simon. Online metric algorithms with untrusted predictions. ACM Trans. Algorithms, 19(2):19:1-19:34, 2023. URL: https://doi.org/10.1145/3582689.
  6. Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, and Pavel Kolev. Secretary and online matching problems with machine learned advice. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
  7. Etienne Bamas, Andreas Maggiori, and Ola Svensson. The primal-dual method for learning augmented algorithms. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
  8. Siddhartha Banerjee, Vincent Cohen-Addad, Anupam Gupta, and Zhouzi Li. Graph searching with predictions. In 14th Innovations in Theoretical Computer Science Conference (ITCS), volume 251 of LIPIcs, pages 12:1-12:24, 2023. URL: https://doi.org/10.4230/LIPICS.ITCS.2023.12.
  9. Arnab Bhattacharyya and Yuichi Yoshida. Property Testing - Problems and Techniques. Springer, 2022. URL: https://doi.org/10.1007/978-981-16-8622-1.
  10. Jan van den Brand, Sebastian Forster, Yasamin Nazari, and Adam Polak. On dynamic graph algorithms with predictions. In Proceedings of the 2024 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3534-3557, 2024. URL: https://doi.org/10.1137/1.9781611977912.126.
  11. Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, and Chen Wang. Learning-augmented maximum independent set. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM), volume 317 of LIPIcs, pages 24:1-24:18, 2024. URL: https://doi.org/10.4230/LIPICS.APPROX/RANDOM.2024.24.
  12. Justin Chen, Sandeep Silwal, Ali Vakilian, and Fred Zhang. Faster fundamental graph algorithms via learned predictions. In International Conference on Machine Learning (ICML), volume 162 of Proceedings of Machine Learning Research, pages 3583-3602, 2022. URL: https://proceedings.mlr.press/v162/chen22v.html.
  13. Justin Y. Chen, Talya Eden, Piotr Indyk, Honghao Lin, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, Tal Wagner, David P. Woodruff, and Michael Zhang. Triangle and four cycle counting with predictions in graph streams. In The Tenth International Conference on Learning Representations (ICLR), 2022. URL: https://openreview.net/forum?id=8in_5gN9I0.
  14. Vincent Cohen-Addad, Tommaso d'Orsi, Anupam Gupta, Euiwoong Lee, and Debmalya Panigrahi. Learning-augmented approximation algorithms for maximum cut and related problems. In Advances in Neural Information Processing Systems 37: Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.
  15. Graham Cormode and Shan Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005. URL: https://doi.org/10.1016/J.JALGOR.2003.12.001.
  16. Sami Davies, Benjamin Moseley, Sergei Vassilvitskii, and Yuyan Wang. Predictive flows for faster Ford-Fulkerson. In International Conference on Machine Learning (ICML), volume 202 of Proceedings of Machine Learning Research, pages 7231-7248, 2023. URL: https://proceedings.mlr.press/v202/davies23b.html.
  17. Adela F DePavia, Erasmo Tani, and Ali Vakilian. Learning-based algorithms for graph searching problems. In International Conference on Artificial Intelligence and Statistics (AISTATS), volume 238 of Proceedings of Machine Learning Research, pages 928-936, 2024. URL: https://proceedings.mlr.press/v238/depavia24a.html.
  18. Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Faster matchings via learned duals. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 10393-10406, 2021. URL: https://proceedings.neurips.cc/paper/2021/hash/5616060fb8ae85d93f334e7267307664-Abstract.html.
  19. Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, and Tal Wagner. Learning-based support estimation in sublinear time. In 9th International Conference on Learning Representations (ICLR), 2021. URL: https://openreview.net/forum?id=tilovEHA3YS.
  20. Paolo Ferragina and Giorgio Vinciguerra. The PGM-index: a fully-dynamic compressed learned index with provable worst-case bounds. Proc. VLDB Endow., 13(8):1162-1175, 2020. URL: https://doi.org/10.14778/3389133.3389135.
  21. Suprovat Ghoshal, Konstantin Makarychev, and Yury Makarychev. Constraint satisfaction problems with advice. In Proceedings of the 2025 ACM-SIAM Symposium on Discrete Algorithms (SODA), 2025.
  22. Michel X Goemans and David P Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115-1145, 1995. URL: https://doi.org/10.1145/227683.227684.
  23. Monika Henzinger, Barna Saha, Martin P Seybold, and Christopher Ye. On the complexity of algorithms with predictions for dynamic graph problems. In 15th Innovations in Theoretical Computer Science Conference (ITCS), volume 287 of LIPIcs, pages 62:1-62:25, 2024. URL: https://doi.org/10.4230/LIPICS.ITCS.2024.62.
  24. Chen-Yu Hsu, Piotr Indyk, Dina Katabi, and Ali Vakilian. Learning-based frequency estimation algorithms. In 7th International Conference on Learning Representations (ICLR), 2019.
  25. Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, and Manish Purohit. Online knapsack with frequency predictions. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 2733-2743, 2021. URL: https://proceedings.neurips.cc/paper/2021/hash/161c5c5ad51fcc884157890511b3c8b0-Abstract.html.
  26. Piotr Indyk, Ali Vakilian, and Yang Yuan. Learning-based low-rank approximations. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 7400-7410, 2019. URL: https://proceedings.neurips.cc/paper/2019/hash/1625abb8e458a79765c62009235e9d5b-Abstract.html.
  27. Tanqiu Jiang, Yi Li, Honghao Lin, Yisong Ruan, and David P Woodruff. Learning-augmented data stream algorithms. In 8th International Conference on Learning Representations (ICLR), 2020.
  28. Hossein Jowhari, Mert Sağlam, and Gábor Tardos. Tight bounds for Lp samplers, finding duplicates in streams, and related problems. In Proceedings of the 30th ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (PODS), pages 49-58, 2011.
  29. Michael Kapralov, Sanjeev Khanna, and Madhu Sudan. Streaming lower bounds for approximating max-cut. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1263-1282, 2015. URL: https://doi.org/10.1137/1.9781611973730.84.
  30. Michael Kapralov, Sanjeev Khanna, Madhu Sudan, and Ameya Velingker. (1+Ω(1))-approximation to MAX-CUT requires linear space. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1703-1722, 2017. URL: https://doi.org/10.1137/1.9781611974782.112.
  31. Michael Kapralov and Dmitry Krachun. An optimal space lower bound for approximating MAX-CUT. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 277-288, 2019. URL: https://doi.org/10.1145/3313276.3316364.
  32. Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, Aaron Sidford, and Jakab Tardos. Fast and space efficient spectral sparsification in dynamic streams. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1814-1833, 2020. URL: https://doi.org/10.1137/1.9781611975994.111.
  33. Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O’Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing, 37(1):319-357, 2007. URL: https://doi.org/10.1137/S0097539705447372.
  34. Dmitry Kogan and Robert Krauthgamer. Sketching cuts in graphs and hypergraphs. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science (ITCS), pages 367-376, 2015. URL: https://doi.org/10.1145/2688073.2688093.
  35. Ravi Kumar, Manish Purohit, and Zoya Svitkina. Improving online algorithms via ML predictions. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 9684-9693, 2018. URL: https://proceedings.neurips.cc/paper/2018/hash/73a427badebe0e32caa2e1fc7530b7f3-Abstract.html.
  36. Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Online scheduling via learned weights. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1859-1877, 2020. URL: https://doi.org/10.1137/1.9781611975994.114.
  37. Silvio Lattanzi, Ola Svensson, and Sergei Vassilvitskii. Speeding up Bellman-Ford via minimum violation permutations. In International Conference on Machine Learning (ICML), volume 202 of Proceedings of Machine Learning Research, pages 18584-18598, 2023. URL: https://proceedings.mlr.press/v202/lattanzi23a.html.
  38. Yi Li, Honghao Lin, Simin Liu, Ali Vakilian, and David P. Woodruff. Learning the positions in countsketch. In The Eleventh International Conference on Learning Representations (ICLR), 2023.
  39. Honghao Lin, Tian Luo, and David P. Woodruff. Learning augmented binary search trees. In International Conference on Machine Learning (ICML), volume 162 of Proceedings of Machine Learning Research, pages 13431-13440, 2022. URL: https://proceedings.mlr.press/v162/lin22f.html.
  40. Quanquan C Liu and Vaidehi Srinivas. The predicted-deletion dynamic model: Taking advantage of ml predictions, for free. arXiv preprint, 2023. URL: https://doi.org/10.48550/arXiv.2307.08890.
  41. Thodoris Lykouris and Sergei Vassilvitskii. Competitive caching with machine learned advice. J. ACM, 68(4):24:1-24:25, 2021. URL: https://doi.org/10.1145/3447579.
  42. Mohammad Mahdian, Hamid Nazerzadeh, and Amin Saberi. Allocating online advertisement space with unreliable estimates. In Proceedings of the 8th ACM conference on Electronic commerce, pages 288-294, 2007. URL: https://doi.org/10.1145/1250910.1250952.
  43. Michael Mitzenmacher. A model for learned Bloom filters and optimizing by sandwiching. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems (NeurIPS), pages 462-471, 2018. URL: https://proceedings.neurips.cc/paper/2018/hash/0f49c89d1e7298bb9930789c8ed59d48-Abstract.html.
  44. Atsuki Sato and Yusuke Matsui. Fast partitioned learned Bloom filter. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS), 2023.
  45. Nicholas Schiefer, Justin Y Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal, and Tal Wagner. Learned interpolation for better streaming quantile approximation with worst-case guarantees. In SIAM Conference on Applied and Computational Discrete Algorithms (ACDA), pages 87-97, 2023. URL: https://doi.org/10.1137/1.9781611977714.8.
  46. Kapil Vaidya, Eric Knorr, Michael Mitzenmacher, and Tim Kraska. Partitioned learned Bloom filters. In 9th International Conference on Learning Representations (ICLR), 2021.
  47. Jeffrey S Vitter. Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS), 11(1):37-57, 1985. URL: https://doi.org/10.1145/3147.3165.