2 Search Results for "Kolb, Samuel"


Document
Learning MAX-SAT Models from Examples Using Genetic Algorithms and Knowledge Compilation

Authors: Senne Berden, Mohit Kumar, Samuel Kolb, and Tias Guns

Published in: LIPIcs, Volume 235, 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)


Abstract
Many real-world problems can be effectively solved by means of combinatorial optimization. However, appropriate models to give to a solver are not always available, and sometimes must be learned from historical data. Although some research has been done in this area, the task of learning (weighted partial) MAX-SAT models has not received much attention thus far, even though such models can be used in many real-world applications. Furthermore, most existing work is limited to learning models from non-contextual data, where instances are labeled as solutions and non-solutions, but without any specification of the contexts in which those labels apply. A recent approach named hassle-sls has addressed these limitations: it can jointly learn hard constraints and weighted soft constraints from labeled contextual examples. However, it is hindered by long runtimes, as evaluating even a single candidate MAX-SAT model requires solving as many models as there are contexts in the training data, which quickly becomes highly expensive when the size of the model increases. In this work, we address these runtime issues. To this end, we make two contributions. First, we propose a faster model evaluation procedure that makes use of knowledge compilation. Second, we propose a genetic algorithm named hassle-gen that decreases the number of evaluations needed to find good models. We experimentally show that both contributions improve on the state of the art by speeding up learning, which in turn allows higher-quality MAX-SAT models to be found within a given learning time budget.
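The contextual labeling semantics described above can be illustrated with a minimal, hypothetical toy encoding (the names and representation below are not from the paper, and brute-force enumeration stands in for both the solver calls and the knowledge-compilation speedup): an example is labeled a solution in its context iff it satisfies the hard clauses and the context, and attains the best soft-clause weight among all context-feasible assignments.

```python
from itertools import product

# Hypothetical toy encoding (not the authors' implementation):
# a clause is a tuple of literals; literal +i / -i means variable i is True/False.
def satisfies(assign, clause):
    return any(assign[abs(l) - 1] == (l > 0) for l in clause)

def soft_value(assign, soft):
    # total weight of the satisfied weighted soft clauses
    return sum(w for w, c in soft if satisfies(assign, c))

def label(assign, hard, soft, context, n):
    # "solution in context" = satisfies hard + context clauses AND is optimal
    # among all context-feasible assignments (brute force over 2^n assignments;
    # this per-context optimization is the cost the paper attacks)
    feasible = lambda a: all(satisfies(a, c) for c in hard + context)
    if not feasible(assign):
        return False
    best = max((soft_value(a, soft) for a in product([False, True], repeat=n)
                if feasible(a)), default=None)
    return soft_value(assign, soft) == best

# candidate model over 3 variables: hard clause (x1 or x2), soft clause prefers x3
hard = [(1, 2)]
soft = [(2.0, (3,))]
print(label((True, False, True), hard, soft, context=[], n=3))   # → True
print(label((True, False, False), hard, soft, context=[], n=3))  # → False (suboptimal)
```

Evaluating one candidate model this way requires an optimization per context, which is exactly why the paper replaces it with a faster, compilation-based evaluation.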

Cite as

Senne Berden, Mohit Kumar, Samuel Kolb, and Tias Guns. Learning MAX-SAT Models from Examples Using Genetic Algorithms and Knowledge Compilation. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 235, pp. 8:1-8:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{berden_et_al:LIPIcs.CP.2022.8,
  author =	{Berden, Senne and Kumar, Mohit and Kolb, Samuel and Guns, Tias},
  title =	{{Learning MAX-SAT Models from Examples Using Genetic Algorithms and Knowledge Compilation}},
  booktitle =	{28th International Conference on Principles and Practice of Constraint Programming (CP 2022)},
  pages =	{8:1--8:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-240-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{235},
  editor =	{Solnon, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2022.8},
  URN =		{urn:nbn:de:0030-drops-166373},
  doi =		{10.4230/LIPIcs.CP.2022.8},
  annote =	{Keywords: Machine learning, constraint learning, MAX-SAT}
}
Document
Learning Constraint Programming Models from Data Using Generate-And-Aggregate

Authors: Mohit Kumar, Samuel Kolb, and Tias Guns

Published in: LIPIcs, Volume 235, 28th International Conference on Principles and Practice of Constraint Programming (CP 2022)


Abstract
Constraint programming (CP) is widely used for solving real-world problems. However, designing CP models requires substantial expertise. In this paper, we tackle this problem by synthesizing models automatically from past solutions. We introduce COUNT-CP, which uses simple grammars and a generate-and-aggregate approach to learn expressive first-order constraints typically used in CP, as well as their parameters, from data. The learned constraints generalize across instances of different sizes and can be used to solve unseen instances, e.g., learning constraints from a 4×4 Sudoku to solve a 9×9 Sudoku, or learning nurse staffing requirements across hospitals. COUNT-CP is implemented using the CPMpy constraint programming and modelling environment to produce constraints with nested mathematical expressions. The method is empirically evaluated on a set of suitable benchmark problems and is shown to learn accurate and compact models quickly.
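The generate-and-aggregate idea can be sketched in a minimal, hypothetical form (this is not COUNT-CP or its grammar; in the actual system the kept constraints would be emitted as CPMpy expressions): enumerate candidate constraints over row/column partitions of a grid and keep those that hold on every example solution.

```python
# Hypothetical generate-and-aggregate sketch (names invented for illustration):
# candidate constraints are "AllDifferent over each group of a partition".
def all_different(values):
    return len(values) == len(set(values))

def rows(grid):
    return [list(r) for r in grid]

def cols(grid):
    return [list(c) for c in zip(*grid)]

# generate step: candidate constraint templates from a tiny "grammar"
candidates = {"alldiff_rows": rows, "alldiff_cols": cols}

# training data: one 4x4 Sudoku solution (Latin-square structure only)
examples = [[
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]]

# aggregate step: keep the candidates satisfied by all example solutions
learned = [name for name, groups in candidates.items()
           if all(all(all_different(g) for g in groups(ex)) for ex in examples)]
print(learned)  # → ['alldiff_rows', 'alldiff_cols']
```

Because the learned templates are expressed over partitions rather than fixed cells, the same constraints apply unchanged to a 9×9 grid, which mirrors the cross-size generalization claimed in the abstract.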

Cite as

Mohit Kumar, Samuel Kolb, and Tias Guns. Learning Constraint Programming Models from Data Using Generate-And-Aggregate. In 28th International Conference on Principles and Practice of Constraint Programming (CP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 235, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.CP.2022.29,
  author =	{Kumar, Mohit and Kolb, Samuel and Guns, Tias},
  title =	{{Learning Constraint Programming Models from Data Using Generate-And-Aggregate}},
  booktitle =	{28th International Conference on Principles and Practice of Constraint Programming (CP 2022)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-240-2},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{235},
  editor =	{Solnon, Christine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2022.29},
  URN =		{urn:nbn:de:0030-drops-166580},
  doi =		{10.4230/LIPIcs.CP.2022.29},
  annote =	{Keywords: Constraint Learning, Constraint Programming, Model Synthesis}
}
  • Refine by Author
  • 2 Guns, Tias
  • 2 Kolb, Samuel
  • 2 Kumar, Mohit
  • 1 Berden, Senne

  • Refine by Classification
  • 1 Applied computing → Operations research
  • 1 Computing methodologies → Machine learning

  • Refine by Keyword
  • 1 Constraint Learning
  • 1 Constraint Programming
  • 1 MAX-SAT
  • 1 Machine learning
  • 1 Model Synthesis

  • Refine by Type
  • 2 document

  • Refine by Publication Year
  • 2 2022
