Search Results

Documents authored by Dilkina, Bistra


Document
Machine Learning Augmented Algorithms for Combinatorial Optimization Problems (Dagstuhl Seminar 24441)

Authors: Deepak Ajwani, Bistra Dilkina, Tias Guns, and Ulrich Carsten Meyer

Published in: Dagstuhl Reports, Volume 14, Issue 10 (2025)


Abstract
Combinatorial optimization problems are pervasive across critical domains, including business analytics, engineering, supply chain management, transportation, and bioinformatics. Many of these problems are NP-hard, posing significant challenges for even moderately sized instances. Moreover, even when polynomial-time algorithms exist, their practical implementation can be computationally expensive for real-world applications. This has driven decades of research across diverse fields, encompassing exact and approximation algorithms, parameterized algorithms, algorithm engineering, operations research, optimization solvers (such as mixed-integer linear programming and constraint programming solvers), and nature-inspired metaheuristics. Recently, there has been a surge in research exploring the synergistic integration of machine learning techniques with algorithmic insights and optimization solvers to enhance the scalability of solving combinatorial optimization problems. However, research efforts in this area are currently fragmented across several distinct communities, including those focused on "Learning to scale optimization solvers," "Algorithm Engineering," "Algorithms with predictions," and "Decision-focused learning." This seminar brought together researchers from these diverse communities, fostering a dialogue on effectively combining algorithm engineering techniques, optimization solvers, and machine learning to address these challenging problems. The seminar facilitated the development of a shared vocabulary, clarifying similarities and distinctions between concepts across different research areas. Furthermore, significant progress was made in identifying key research directions for the future advancement of this field. We anticipate that these outcomes will serve as a valuable roadmap for advancing this exciting research area.

Cite as

Deepak Ajwani, Bistra Dilkina, Tias Guns, and Ulrich Carsten Meyer. Machine Learning Augmented Algorithms for Combinatorial Optimization Problems (Dagstuhl Seminar 24441). In Dagstuhl Reports, Volume 14, Issue 10, pp. 76-100, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{ajwani_et_al:DagRep.14.10.76,
  author =	{Ajwani, Deepak and Dilkina, Bistra and Guns, Tias and Meyer, Ulrich Carsten},
  title =	{{Machine Learning Augmented Algorithms for Combinatorial Optimization Problems (Dagstuhl Seminar 24441)}},
  pages =	{76--100},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2025},
  volume =	{14},
  number =	{10},
  editor =	{Ajwani, Deepak and Dilkina, Bistra and Guns, Tias and Meyer, Ulrich Carsten},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.14.10.76},
  URN =		{urn:nbn:de:0030-drops-230216},
  doi =		{10.4230/DagRep.14.10.76},
  annote =	{Keywords: Algorithm Engineering, Combinatorial Optimization, Constraint Solvers, Machine Learning}
}
Document
Learning Lagrangian Multipliers for the Travelling Salesman Problem

Authors: Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the traveling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently utilized to enhance the filtering process carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric traveling salesman problem, considering instances with up to 200 cities. The results illustrate that our approach can improve the filtering level of the weighted circuit global constraint, reduce the optimality gap by a factor of two on instances left unsolved at the timeout, and reduce the execution time for solved instances by 10%.
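For readers unfamiliar with the Held-Karp relaxation mentioned in the abstract, the following is a minimal sketch of the 1-tree lower bound that a set of Lagrangian multipliers (whether obtained by subgradient steps or, as in the paper, predicted by a graph neural network) parameterizes. The function name and the toy instance are illustrative, not taken from the paper: node costs are shifted by the multipliers, a minimum 1-tree is computed, and the multiplier sum is subtracted back out.

```python
def held_karp_bound(dist, lam):
    """Held-Karp 1-tree lower bound for the symmetric TSP.

    dist: n x n symmetric cost matrix; lam: one multiplier per node.
    Each edge cost is shifted to dist[i][j] + lam[i] + lam[j]; the bound
    is the minimum 1-tree weight minus 2 * sum(lam). For any lam, this
    value is a valid lower bound on the optimal tour length.
    """
    n = len(dist)
    cost = lambda i, j: dist[i][j] + lam[i] + lam[j]

    # Minimum spanning tree over nodes 1..n-1 (Prim's algorithm).
    best = {v: cost(1, v) for v in range(2, n)}
    mst, tree_size = 0.0, 1
    while tree_size < n - 1:
        v = min(best, key=best.get)
        mst += best.pop(v)
        tree_size += 1
        for u in best:
            best[u] = min(best[u], cost(v, u))

    # Node 0 joins via its two cheapest incident edges, closing the 1-tree.
    e = sorted(cost(0, v) for v in range(1, n))
    one_tree = mst + e[0] + e[1]
    return one_tree - 2 * sum(lam)
```

With zero multipliers this is just the plain 1-tree bound; better multipliers tighten it toward the optimal tour length, which is why a learned warm start can save the many subgradient iterations normally needed before the bound becomes useful for filtering.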

Cite as

Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, and Louis-Martin Rousseau. Learning Lagrangian Multipliers for the Travelling Salesman Problem. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{parjadis_et_al:LIPIcs.CP.2024.22,
  author =	{Parjadis, Augustin and Cappart, Quentin and Dilkina, Bistra and Ferber, Aaron and Rousseau, Louis-Martin},
  title =	{{Learning Lagrangian Multipliers for the Travelling Salesman Problem}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.22},
  URN =		{urn:nbn:de:0030-drops-207076},
  doi =		{10.4230/LIPIcs.CP.2024.22},
  annote =	{Keywords: Lagrangian relaxation, unsupervised learning, graph neural network}
}