Learning Lagrangian Multipliers for the Travelling Salesman Problem

Authors: Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, Louis-Martin Rousseau




File

LIPIcs.CP.2024.22.pdf
  • Filesize: 0.93 MB
  • 18 pages

Document Identifiers
  • DOI: 10.4230/LIPIcs.CP.2024.22

Author Details

Augustin Parjadis
  • Polytechnique Montréal, Canada
Quentin Cappart
  • Polytechnique Montréal, Canada
Bistra Dilkina
  • Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, CA, USA
Aaron Ferber
  • Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, CA, USA
Louis-Martin Rousseau
  • Polytechnique Montréal, Canada

Acknowledgements

We sincerely thank the anonymous reviewers for their constructive feedback. Their comments helped us better position our contribution within the field. Furthermore, their insights have provided guidance for our future research directions.

Cite As

Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau. Learning Lagrangian Multipliers for the Travelling Salesman Problem. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/LIPIcs.CP.2024.22

Abstract

Lagrangian relaxation is a versatile mathematical technique for relaxing constraints in an optimization problem, enabling both the generation of dual bounds that prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an unsupervised learning approach that harnesses graph neural networks to exploit the problem structure and generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the traveling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to use them as a warm start for generating Held-Karp relaxation bounds, which are then used to strengthen the filtering carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which focuses primarily on finding feasible solutions, our approach operates on the dual side and demonstrates that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric traveling salesman problem on instances with up to 200 cities. The results show that our approach improves the filtering level of the weighted circuit global constraint, halves the optimality gap for instances left unsolved at the timeout, and reduces the execution time for solved instances by 10%.
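
For readers unfamiliar with the bound named above, the Held-Karp relaxation admits a compact statement. A 1-tree is a spanning tree over all cities except a designated root, together with two edges incident to the root; every tour is a 1-tree, so minimizing over 1-trees relaxes the problem. Penalizing the degree-2 constraints with one multiplier $\lambda_i$ per city gives, for any $\lambda$, a valid lower bound on the optimal tour length (standard notation, paraphrased from Held and Karp rather than quoted from this paper):

\[
w(\lambda) \;=\; \min_{T \in \mathcal{T}_1} \sum_{(i,j) \in T} \bigl( c_{ij} + \lambda_i + \lambda_j \bigr) \;-\; 2 \sum_{i \in V} \lambda_i \;\le\; \mathrm{OPT},
\]

where $\mathcal{T}_1$ is the set of 1-trees and $c_{ij}$ the edge costs. Subgradient ascent maximizes $w(\lambda)$ by iterating $\lambda_i \leftarrow \lambda_i + t_k \, (d_T(i) - 2)$, with $d_T(i)$ the degree of city $i$ in the minimizing 1-tree; the approach described above replaces the costly early iterations of this loop with multipliers predicted by a graph neural network.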

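The sketch below, assuming NumPy and NetworkX, illustrates this warm-started bounding loop; the function names, the diminishing step-size rule, and the MST-based 1-tree computation are illustrative choices, not the authors' implementation, which embeds the bound inside a weighted circuit propagator.

```python
import itertools

import networkx as nx
import numpy as np


def one_tree_bound(cost, lam):
    """Held-Karp dual bound w(lam) for a symmetric (n x n) cost matrix.

    Returns the bound and the node degrees of the minimizing 1-tree.
    """
    n = len(lam)
    # Penalized costs: c'_{ij} = c_{ij} + lam_i + lam_j.
    penalized = cost + lam[:, None] + lam[None, :]

    # Minimum spanning tree over cities {1, ..., n-1}; city 0 is the root.
    g = nx.Graph()
    for i, j in itertools.combinations(range(1, n), 2):
        g.add_edge(i, j, weight=penalized[i, j])
    mst = nx.minimum_spanning_tree(g)

    degree = np.zeros(n, dtype=int)
    total = 0.0
    for i, j in mst.edges():
        total += penalized[i, j]
        degree[i] += 1
        degree[j] += 1

    # Complete the 1-tree: attach the root via its two cheapest edges.
    for j in np.argsort(penalized[0, 1:])[:2] + 1:
        total += penalized[0, j]
        degree[0] += 1
        degree[j] += 1

    # c(T) + sum_i lam_i * (deg_T(i) - 2) = total - 2 * sum(lam).
    return total - 2.0 * lam.sum(), degree


def subgradient_ascent(cost, lam0, steps=100, step_size=1.0):
    """Maximize w(lam) by subgradient ascent from a warm start lam0."""
    lam, best = lam0.astype(float), -np.inf
    for k in range(steps):
        bound, degree = one_tree_bound(cost, lam)
        best = max(best, bound)
        subgrad = degree - 2  # zero everywhere iff the 1-tree is a tour
        if not subgrad.any():
            break  # the relaxation is tight: the 1-tree is an optimal tour
        lam = lam + step_size / (k + 1) * subgrad
    return best, lam


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((30, 2))
    cost = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    # A trained GNN would supply lam0 here; zeros reproduce plain Held-Karp.
    bound, lam = subgradient_ascent(cost, np.zeros(30))
    print(f"Held-Karp lower bound: {bound:.4f}")
```

Starting from a good prediction `lam0`, far fewer ascent steps are needed before the bound is strong enough to filter edges or close the optimality gap, which is the speed-up the abstract reports.
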
Subject Classification

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
  • Theory of computation → Constraint and logic programming
  • Computing methodologies → Machine learning
Keywords
  • Lagrangian relaxation
  • unsupervised learning
  • graph neural network
