29 Search Results for "Gaspers, Serge"


Document
On Connections Between k-Coloring and Euclidean k-Means

Authors: Enver Aman, Karthik C. S., and Sharath Punna

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Euclidean k-means problem we are given as input a set P of n points in ℝ^d, and the goal is to find a set of k points C ⊆ ℝ^d so as to minimize the sum of the squared Euclidean distances from each point in P to its closest center in C. In this paper, we formally explore connections between the k-coloring problem on graphs and the Euclidean k-means problem. Our results are as follows: - For all k ≥ 3, we provide a simple reduction from the k-coloring problem on regular graphs to the Euclidean k-means problem. Moreover, our technique extends to enable a reduction from a structured max-cut problem (which may be considered as a partial 2-coloring problem) to the Euclidean 2-means problem. Thus, we have a simple and alternative proof of the NP-hardness of the Euclidean 2-means problem. - In the other direction, we mimic the O(1.7297ⁿ) time algorithm of Williams [TCS'05] for the max-cut problem on n vertices to obtain an algorithm for the Euclidean 2-means problem with the same runtime, improving on the naive exhaustive search running in 2ⁿ⋅ poly(n,d) time. - We prove similar results and connections as above for the Euclidean k-min-sum problem.
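As a point of reference for the objective above, here is a minimal sketch (not from the paper) of the Euclidean k-means cost that the problem asks to minimize; the function name and the use of NumPy are illustrative assumptions.

import numpy as np

def kmeans_cost(P, C):
    # P: (n, d) array of input points; C: (k, d) array of candidate centers.
    # Returns the sum of squared Euclidean distances from each point to its closest center.
    P = np.asarray(P, dtype=float)
    C = np.asarray(C, dtype=float)
    sq_dists = ((P[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # shape (n, k)
    return sq_dists.min(axis=1).sum()

# Example: two well-separated clusters, k = 2
print(kmeans_cost([[0, 0], [0, 1], [10, 10], [10, 11]], [[0, 0.5], [10, 10.5]]))  # 1.0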

Cite as

Enver Aman, Karthik C. S., and Sharath Punna. On Connections Between k-Coloring and Euclidean k-Means. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{aman_et_al:LIPIcs.ESA.2024.9,
  author =	{Aman, Enver and Karthik C. S. and Punna, Sharath},
  title =	{{On Connections Between k-Coloring and Euclidean k-Means}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.9},
  URN =		{urn:nbn:de:0030-drops-210808},
  doi =		{10.4230/LIPIcs.ESA.2024.9},
  annote =	{Keywords: k-means, k-minsum, Euclidean space, fine-grained complexity}
}
Document
Bicriteria Approximation for Minimum Dilation Graph Augmentation

Authors: Kevin Buchin, Maike Buchin, Joachim Gudmundsson, and Sampson Wong

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Spanner constructions focus on the initial design of the network. However, networks tend to improve over time. In this paper, we focus on the improvement step. Given a graph and a budget k, which k edges do we add to the graph to minimise its dilation? Gudmundsson and Wong [TALG'22] provided the first positive result for this problem, but their approximation factor is linear in k. Our main result is a (2 ⋅ 2^{1/r} ⋅ k^{1/r}, 2r)-bicriteria approximation that runs in O(n³ log n) time, for all r ≥ 1. In other words, if t^* is the minimum dilation after adding any k edges to a graph, then our algorithm adds O(k^{1+1/r}) edges to the graph to obtain a dilation of 2rt^*. Moreover, our analysis of the algorithm is tight under the Erdős girth conjecture.
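To make the dilation objective concrete, the following is a naive sketch (not the paper's algorithm) that computes the dilation of a connected Euclidean graph via Floyd–Warshall; names are illustrative. The augmentation problem then asks which k extra edges minimize this quantity.

import math

def dilation(points, edges):
    # points: list of coordinate tuples; edges: list of (i, j) index pairs; graph assumed connected.
    # Dilation = max over vertex pairs of (graph distance / Euclidean distance).
    n = len(points)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for i, j in edges:
        w = math.dist(points[i], points[j])
        d[i][j] = min(d[i][j], w)
        d[j][i] = min(d[j][i], w)
    for k in range(n):               # Floyd-Warshall, O(n^3)
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(d[i][j] / math.dist(points[i], points[j])
               for i in range(n) for j in range(i + 1, n))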

Cite as

Kevin Buchin, Maike Buchin, Joachim Gudmundsson, and Sampson Wong. Bicriteria Approximation for Minimum Dilation Graph Augmentation. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 36:1-36:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{buchin_et_al:LIPIcs.ESA.2024.36,
  author =	{Buchin, Kevin and Buchin, Maike and Gudmundsson, Joachim and Wong, Sampson},
  title =	{{Bicriteria Approximation for Minimum Dilation Graph Augmentation}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{36:1--36:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.36},
  URN =		{urn:nbn:de:0030-drops-211079},
  doi =		{10.4230/LIPIcs.ESA.2024.36},
  annote =	{Keywords: Greedy spanner, Graph augmentation}
}
Document
Hitting Meets Packing: How Hard Can It Be?

Authors: Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We study a general family of problems that form a common generalization of classic hitting (also referred to as covering or transversal) and packing problems. An instance of 𝒳-HitPack asks: Can removing k (deletable) vertices of a graph G prevent us from packing 𝓁 vertex-disjoint objects of type 𝒳? This problem captures a spectrum of problems with standard hitting and packing on opposite ends. Our main motivating question is whether the combination 𝒳-HitPack can be significantly harder than these two base problems. Already for one particular choice of 𝒳, this question can be posed for many different complexity notions, leading to a large, so-far unexplored domain at the intersection of the areas of hitting and packing problems. At a high level, we present two case studies: (1) 𝒳 being all cycles, and (2) 𝒳 being all copies of a fixed graph H. In each, we explore the classical complexity as well as the parameterized complexity with the natural parameters k+𝓁 and treewidth. We observe that the combined problem can be drastically harder than the base problems: for cycles or for H being a connected graph on at least 3 vertices, the problem is Σ₂^𝖯-complete and requires double-exponential dependence on the treewidth of the graph (assuming the Exponential-Time Hypothesis). In contrast, the combined problem admits running times qualitatively similar to those of the base problems in some cases, although significant novel ideas are required. For example, for 𝒳 being all cycles, we establish a 2^{poly(k+𝓁)}⋅ n^{𝒪(1)}-time algorithm using an involved branching method. Also, for 𝒳 being all edges (i.e., H = K₂; this combines Vertex Cover and Maximum Matching), the problem can be solved in time 2^{poly(tw)}⋅ n^{𝒪(1)} on graphs of treewidth tw. The key step enabling this running time relies on a combinatorial bound obtained from an algebraic (linear delta-matroid) representation of possible matchings.
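To illustrate the problem definition for the special case 𝒳 = all edges (H = K₂), here is a brute-force sketch that asks whether deleting at most k vertices leaves no matching of size 𝓁. It is exponential and purely illustrative, not the paper's algorithm, and it assumes the networkx library for maximum matching.

from itertools import combinations
import networkx as nx

def edge_hitpack(G, k, l):
    # Can we delete at most k vertices of G so that no l vertex-disjoint edges remain?
    for r in range(k + 1):
        for S in combinations(G.nodes, r):
            H = G.copy()
            H.remove_nodes_from(S)
            # maximum-cardinality matching of the remaining graph
            M = nx.max_weight_matching(H, maxcardinality=True)
            if len(M) < l:
                return True
    return False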

Cite as

Jacob Focke, Fabian Frei, Shaohua Li, Dániel Marx, Philipp Schepper, Roohani Sharma, and Karol Węgrzycki. Hitting Meets Packing: How Hard Can It Be?. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 55:1-55:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{focke_et_al:LIPIcs.ESA.2024.55,
  author =	{Focke, Jacob and Frei, Fabian and Li, Shaohua and Marx, D\'{a}niel and Schepper, Philipp and Sharma, Roohani and W\k{e}grzycki, Karol},
  title =	{{Hitting Meets Packing: How Hard Can It Be?}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{55:1--55:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.55},
  URN =		{urn:nbn:de:0030-drops-211261},
  doi =		{10.4230/LIPIcs.ESA.2024.55},
  annote =	{Keywords: Hitting, Packing, Covering, Parameterized Algorithms, Lower Bounds, Treewidth}
}
Document
Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard

Authors: Klaus Heeger and Danny Hermelin

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We consider the 1∣∣∑ w_jU_j problem, the problem of minimizing the weighted number of tardy jobs on a single machine. This problem is one of the most basic and fundamental problems in scheduling theory, with several different applications both in theory and practice. Using a reduction from the Multicolored Clique problem, we prove that 1∣∣∑ w_jU_j is W[1]-hard with respect to the number p_# of different processing times in the input, as well as with respect to the number w_# of different weights in the input. This, along with previous work, provides a complete picture for 1∣∣∑ w_jU_j from the perspective of parameterized complexity, as well as almost tight complexity bounds for the problem under the Exponential Time Hypothesis (ETH).
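For context, the problem itself admits a classical pseudo-polynomial dynamic program over total processing time (standard background, not the W[1]-hardness construction of the paper); a minimal sketch, with illustrative names:

def min_weight_of_tardy_jobs(jobs):
    # jobs: list of (p_j, w_j, d_j) triples (processing time, weight, due date).
    # Returns the minimum total weight of tardy jobs on a single machine.
    # DP with jobs in earliest-due-date order; best[t] = max weight of on-time jobs finishing exactly at time t.
    jobs = sorted(jobs, key=lambda job: job[2])
    T = sum(p for p, _, _ in jobs)
    NEG = float("-inf")
    best = [0] + [NEG] * T
    for p, w, d in jobs:
        for t in range(min(d, T), p - 1, -1):
            if best[t - p] != NEG:
                best[t] = max(best[t], best[t - p] + w)
    return sum(w for _, w, _ in jobs) - max(best)

print(min_weight_of_tardy_jobs([(2, 3, 2), (2, 5, 3)]))  # 3: only one job fits on time, keep the weight-5 one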

Cite as

Klaus Heeger and Danny Hermelin. Minimizing the Weighted Number of Tardy Jobs Is W[1]-Hard. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 68:1-68:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.ESA.2024.68,
  author =	{Heeger, Klaus and Hermelin, Danny},
  title =	{{Minimizing the Weighted Number of Tardy Jobs Is W\lbrack1\rbrack-Hard}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{68:1--68:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.68},
  URN =		{urn:nbn:de:0030-drops-211392},
  doi =		{10.4230/LIPIcs.ESA.2024.68},
  annote =	{Keywords: single-machine scheduling, number of different weights, number of different processing times}
}
Document
A Lower Bound for Local Search Proportional Approval Voting

Authors: Sonja Kraiczy and Edith Elkind

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
Selecting k out of m items based on the preferences of n heterogeneous agents is a widely studied problem in algorithmic game theory. If agents have approval preferences over individual items and harmonic utility functions over bundles - an agent receives ∑_{j=1}^t 1/j utility if t of her approved items are selected - then welfare optimisation is captured by a voting rule known as Proportional Approval Voting (PAV). PAV also satisfies demanding fairness axioms. However, finding a winning set of items under PAV is NP-hard. In search of a tractable method with strong fairness guarantees, a bounded local search version of PAV was proposed [Aziz et al., 2018]. It proceeds by starting with an arbitrary size-k set W and, at each step, checking if there is a pair of candidates a ∈ W, b ∉ W such that swapping a and b increases the total welfare by at least ε; if yes, it performs the swap. Aziz et al. show that setting ε = n/(k²) ensures both the desired fairness guarantees and polynomial running time. However, they leave it open whether the algorithm converges in polynomial time if ε is very small (in particular, if we do not stop until there are no welfare-improving swaps). We resolve this open question by showing that, if ε can be arbitrarily small, the running time of this algorithm may be super-polynomial. Specifically, we prove a lower bound of Ω(k^{log k}) if improvements are chosen lexicographically. To complement our lower bound, we provide an empirical comparison of two variants of local search - better-response and best-response - on several real-life data sets and a variety of synthetic data sets. Our experiments indicate that, empirically, better response exhibits faster running time than best response.
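A minimal sketch of the PAV score and the better-response swap step analysed above; tie-breaking, the initial committee, and the ε-threshold handling are simplified assumptions, not the exact procedure from the paper.

import itertools

def pav_score(approvals, W):
    # approvals: list of sets of approved candidates, one per voter; W: selected committee (set).
    return sum(sum(1.0 / j for j in range(1, len(A & W) + 1)) for A in approvals)

def local_search_pav(approvals, candidates, k, eps):
    # eps > 0 guarantees termination: each accepted swap raises the PAV score by at least eps.
    W = set(sorted(candidates)[:k])                  # arbitrary initial size-k committee
    improved = True
    while improved:
        improved = False
        for a, b in itertools.product(sorted(W), sorted(set(candidates) - W)):
            W_swap = (W - {a}) | {b}
            if pav_score(approvals, W_swap) >= pav_score(approvals, W) + eps:
                W = W_swap                           # better response: take the first improving swap
                improved = True
                break
    return W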

Cite as

Sonja Kraiczy and Edith Elkind. A Lower Bound for Local Search Proportional Approval Voting. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 82:1-82:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kraiczy_et_al:LIPIcs.ESA.2024.82,
  author =	{Kraiczy, Sonja and Elkind, Edith},
  title =	{{A Lower Bound for Local Search Proportional Approval Voting}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{82:1--82:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.82},
  URN =		{urn:nbn:de:0030-drops-211538},
  doi =		{10.4230/LIPIcs.ESA.2024.82},
  annote =	{Keywords: Computational Social Choice, Committee Elections, Local Search, Fairness}
}
Document
A Faster Algorithm for the 4-Coloring Problem

Authors: Pu Wu, Huanyu Gu, Huiqin Jiang, Zehui Shao, and Jin Xu

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
We explore the 4-coloring problem, a fundamental combinatorial NP-hard problem. Given a graph G, the 4-coloring problem asks whether there exists a function f from the vertex set of G to {1,2,3,4} such that f(u)≠ f(v) for each edge uv of G. Such a function f is referred to as a 4-coloring of G. The fastest known algorithm for the 4-coloring problem, introduced by Fomin, Gaspers, and Saurabh (COCOON 2007), has a time complexity of O(1.7272ⁿ) and uses exponential space. In this paper, we propose an enhanced algorithm for the 4-coloring problem with a time complexity of O(1.7159ⁿ) and polynomial space. Our algorithm is deterministic and built upon a novel method. Specifically, inspired by previous algorithmic approaches for the 4-coloring problem, such as the aforementioned O(1.7272ⁿ) time algorithm, we consider the instance (G,I,S), where G is a graph and I,S are subsets of its vertex set representing vertices colored with 1 and vertices unable to be colored with 1, respectively. For a given instance (G,I,S), we aim to determine the existence of a 4-coloring f of G such that f(v) = 1 for v ∈ I and f(v)≠ 1 for v ∈ S. Our key innovation lies in recognizing that, leveraging certain combinatorial properties, the instance (G,I,S) can be efficiently solved when G-I-S is a union of K₃’s and K₄’s (where K₃ and K₄ denote complete graphs with 3 and 4 vertices, respectively). The ability to efficiently solve instances (G,I,S), where G-I-S consists solely of K₃’s and K₄’s, enables us to devise a branching algorithm capable of efficiently addressing instances (G,I,S) where G-I-S is not a union of K₃’s and K₄’s (the other case). Based on this innovative method, we derive our final enhanced algorithm.
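For reference, the decision problem can be stated in a few lines. The sketch below is plain exponential backtracking, only to make the definition concrete; it is in no way the paper's O(1.7159ⁿ) branching algorithm.

def four_colorable(n, edges):
    # n: number of vertices (0..n-1); edges: list of (u, v) pairs.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [0] * n                      # 0 = uncolored, valid colors are 1..4
    def assign(v):
        if v == n:
            return True
        for c in range(1, 5):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = 0
        return False
    return assign(0)

print(four_colorable(5, [(i, j) for i in range(5) for j in range(i + 1, 5)]))  # K5 is not 4-colorable: False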

Cite as

Pu Wu, Huanyu Gu, Huiqin Jiang, Zehui Shao, and Jin Xu. A Faster Algorithm for the 4-Coloring Problem. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 103:1-103:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{wu_et_al:LIPIcs.ESA.2024.103,
  author =	{Wu, Pu and Gu, Huanyu and Jiang, Huiqin and Shao, Zehui and Xu, Jin},
  title =	{{A Faster Algorithm for the 4-Coloring Problem}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{103:1--103:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.103},
  URN =		{urn:nbn:de:0030-drops-211749},
  doi =		{10.4230/LIPIcs.ESA.2024.103},
  annote =	{Keywords: Graph coloring, Graph algorithms, Exact algorithms}
}
Document
Solving Directed Multiway Cut Faster Than 2ⁿ

Authors: Mingyu Xiao

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
In the Directed Multiway Cut problem, we are given a directed graph G = (V,E) and a subset T ⊆ V, called the terminal set. The aim is to find a minimum-sized set S ⊆ V⧵T such that, after deleting S, no directed path exists from one terminal to another in the remaining graph. It has been an open question whether Directed Multiway Cut can be solved faster than the trivial running-time bound O^*(2^{|V|}). In this paper, we provide a positive answer to this question, presenting an algorithm with a running-time bound of O(1.9967^{|V|}).
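The trivial O^*(2^{|V|}) baseline mentioned above can be written directly; this brute-force sketch (illustrative only, with assumed names) enumerates candidate deletion sets in increasing size and checks terminal-to-terminal reachability.

from itertools import combinations

def directed_multiway_cut(V, E, T):
    # V: list of vertices; E: collection of directed edges (u, v); T: collection of terminals.
    # Returns a smallest S ⊆ V \ T whose deletion leaves no directed path between distinct terminals.
    def reachable(src, removed):
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            for a, b in E:
                if a == u and b not in removed and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen
    nonterminals = [v for v in V if v not in T]
    for r in range(len(nonterminals) + 1):
        for S in combinations(nonterminals, r):
            removed = set(S)
            if all(not (reachable(t, removed) & (set(T) - {t})) for t in T):
                return removed
    return None   # no solution exists, e.g. when one terminal has a direct edge to another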

Cite as

Mingyu Xiao. Solving Directed Multiway Cut Faster Than 2ⁿ. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 104:1-104:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{xiao:LIPIcs.ESA.2024.104,
  author =	{Xiao, Mingyu},
  title =	{{Solving Directed Multiway Cut Faster Than 2ⁿ}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{104:1--104:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.104},
  URN =		{urn:nbn:de:0030-drops-211758},
  doi =		{10.4230/LIPIcs.ESA.2024.104},
  annote =	{Keywords: Exact Algorithms, Parameterized Algorithms, Directed Multiway Cut, Directed Multicut, Directed Graphs}
}
Document
APPROX
Competitive Query Minimization for Stable Matching with One-Sided Uncertainty

Authors: Evripidis Bampis, Konstantinos Dogeas, Thomas Erlebach, Nicole Megow, Jens Schlöter, and Amitabh Trehan

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the two-sided stable matching problem with one-sided uncertainty for two sets of agents A and B, with equal cardinality. Initially, the preference lists of the agents in A are given but the preferences of the agents in B are unknown. An algorithm can make queries to reveal information about the preferences of the agents in B. We examine three query models: comparison queries, interviews, and set queries. Using competitive analysis, our aim is to design algorithms that minimize the number of queries required to solve the problem of finding a stable matching or verifying that a given matching is stable (or stable and optimal for the agents of one side). We present various upper and lower bounds on the best possible competitive ratio as well as results regarding the complexity of the offline problem of determining the optimal query set given full information.
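As background for the verification variant, a full-information stability check (searching for a blocking pair) is straightforward; the sketch below is the standard textbook check, not specific to the query models studied in the paper, and assumes a perfect matching between A and B.

def is_stable(pref_a, pref_b, match):
    # pref_a[a], pref_b[b]: preference lists, most preferred first; match: dict mapping each a in A to its partner in B.
    rank_b = {b: {a: i for i, a in enumerate(lst)} for b, lst in pref_b.items()}
    partner_of = {b: a for a, b in match.items()}
    for a, lst in pref_a.items():
        for b in lst:
            if b == match[a]:
                break                    # a prefers match[a] to every candidate after this point
            if rank_b[b][a] < rank_b[b][partner_of[b]]:
                return False             # (a, b) is a blocking pair
    return True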

Cite as

Evripidis Bampis, Konstantinos Dogeas, Thomas Erlebach, Nicole Megow, Jens Schlöter, and Amitabh Trehan. Competitive Query Minimization for Stable Matching with One-Sided Uncertainty. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 17:1-17:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bampis_et_al:LIPIcs.APPROX/RANDOM.2024.17,
  author =	{Bampis, Evripidis and Dogeas, Konstantinos and Erlebach, Thomas and Megow, Nicole and Schl\"{o}ter, Jens and Trehan, Amitabh},
  title =	{{Competitive Query Minimization for Stable Matching with One-Sided Uncertainty}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{17:1--17:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.17},
  URN =		{urn:nbn:de:0030-drops-210100},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.17},
  annote =	{Keywords: Matching under Preferences, Stable Marriage, Query-Competitive Algorithms, Uncertainty}
}
Document
ParLS-PBO: A Parallel Local Search Solver for Pseudo Boolean Optimization

Authors: Zhihan Chen, Peng Lin, Hao Hu, and Shaowei Cai

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
Local search, a technique broadly applied to numerous optimization problems, has recently been employed to solve the Pseudo-Boolean Optimization (PBO) problem. A representative local search solver for PBO is LS-PBO. In this paper, we first improve LS-PBO with a dynamic scoring mechanism, which dynamically balances the score on hard constraints against the score on the objective function. Moreover, on top of this improved LS-PBO, we develop the first parallel local search PBO solver. The main idea is to share good solutions among different threads to guide the search, by maintaining a pool of feasible solutions. For evaluating solutions when updating the pool, we propose a function that considers both the solution quality and the diversity of the pool. Furthermore, we calculate the polarity density in the pool to enhance the scoring function of local search. Our empirical experiments show clear benefits of the proposed parallel approach, making it competitive with the parallel version of the well-known commercial solver Gurobi.
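One ingredient above, the polarity density of the solution pool, can plausibly be read as the per-variable fraction of pooled solutions assigning it to true; the sketch below reflects that reading and is an assumption for illustration, not the paper's exact definition.

def polarity_density(pool, n_vars):
    # pool: list of feasible solutions, each a sequence of 0/1 values indexed by variable.
    # Returns, for each variable, the fraction of pooled solutions that set it to 1.
    return [sum(sol[i] for sol in pool) / len(pool) for i in range(n_vars)]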

Cite as

Zhihan Chen, Peng Lin, Hao Hu, and Shaowei Cai. ParLS-PBO: A Parallel Local Search Solver for Pseudo Boolean Optimization. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 5:1-5:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CP.2024.5,
  author =	{Chen, Zhihan and Lin, Peng and Hu, Hao and Cai, Shaowei},
  title =	{{ParLS-PBO: A Parallel Local Search Solver for Pseudo Boolean Optimization}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{5:1--5:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.5},
  URN =		{urn:nbn:de:0030-drops-206900},
  doi =		{10.4230/LIPIcs.CP.2024.5},
  annote =	{Keywords: Pseudo-Boolean Optimization, Parallel Solving, Local Search, Scoring Function, Solution Pool}
}
Document
Structure-Guided Local Improvement for Maximum Satisfiability

Authors: André Schidler and Stefan Szeider

Published in: LIPIcs, Volume 307, 30th International Conference on Principles and Practice of Constraint Programming (CP 2024)


Abstract
The enhanced performance of today’s MaxSAT solvers has elevated their appeal for many large-scale applications, notably in software analysis and computer-aided design. Our research refines anytime MaxSAT solving by repeatedly identifying smaller subinstances, chosen based on the graphical structure of the instance, and solving them with an exact solver. We investigate various strategies to pinpoint these subinstances. This structure-guided selection of subinstances provides an exact solver with a high potential for improving the current solution. Our exhaustive experimental analysis contrasts our methodology, as instantiated in our tool MaxSLIM, with previous studies and benchmarks it against leading-edge MaxSAT solvers.

Cite as

André Schidler and Stefan Szeider. Structure-Guided Local Improvement for Maximum Satisfiability. In 30th International Conference on Principles and Practice of Constraint Programming (CP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 307, pp. 26:1-26:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schidler_et_al:LIPIcs.CP.2024.26,
  author =	{Schidler, Andr\'{e} and Szeider, Stefan},
  title =	{{Structure-Guided Local Improvement for Maximum Satisfiability}},
  booktitle =	{30th International Conference on Principles and Practice of Constraint Programming (CP 2024)},
  pages =	{26:1--26:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-336-2},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{307},
  editor =	{Shaw, Paul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2024.26},
  URN =		{urn:nbn:de:0030-drops-207112},
  doi =		{10.4230/LIPIcs.CP.2024.26},
  annote =	{Keywords: maximum satisfiability, large neighborhood search (LNS), SAT-based local improvement (SLIM), incomplete MaxSAT, graphical structure, metaheuristic}
}
Document
Parallel Clause Sharing Strategy Based on Graph Structure of SAT Problem

Authors: Yoichiro Iida, Tomohiro Sonobe, and Mary Inaba

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
Parallelization of SAT solvers is an important technique for improving solver performance. The selection of the learnt clauses to share among parallel workers is crucial for its efficiency. Literal block distance (LBD) is often used to evaluate the quality of clauses to select. We propose a new method, Parallel Clause sharing based on graph Structure (PaCS), to select good clauses for sharing. First, we conducted three preliminary experiments to assess the performance of LBD in parallel clause sharing: a performance comparison between LBD and clause size, an analysis of the utilization of shared clauses, and a comparison of the LBD values of shared clauses at the originating and receiving workers. These experiments indicate that LBD may not be optimal for learnt clause sharing. We attribute these results to LBD’s inherent dependency on decision trees. Each parallel worker has a unique decision tree; thus, a shared clause that is good for its originating worker may not be good for others. Therefore, we propose PaCS, a search-independent method that uses the graph structure derived from the input CNF of SAT problems. PaCS evaluates clauses using the weight of their edges in the variable incidence graph. Using the input CNF’s graph is effective for parallel clause sharing because it is the common input for all parallel workers. Furthermore, using edge weights allows selecting clauses whose variables' Boolean values are more likely to be determined. Performance evaluation experiments demonstrate that our strategy outperforms LBD by 4% in the number of solved instances and by 12% in PAR-2. This study opens avenues for further improvements in parallel-solving strategies using the structure of SAT problems and reinterpretations of the quality of learnt clauses.
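Two notions used above can be made concrete with short sketches: the LBD of a learnt clause, and a weighted variable incidence graph built from the input CNF. The weighting shown, 1/C(s,2) per clause over s variables, is a common convention and an assumption here, not necessarily the exact weighting used by PaCS.

from collections import defaultdict
from itertools import combinations

def lbd(clause, level):
    # Literal Block Distance: number of distinct decision levels among the clause's literals.
    # clause: iterable of non-zero integer literals; level: dict mapping variable -> current decision level.
    return len({level[abs(lit)] for lit in clause})

def variable_incidence_graph(cnf):
    # cnf: list of clauses (lists of non-zero integer literals).
    # Each clause over s distinct variables adds 1 / C(s, 2) to the weight of every variable pair it contains.
    weight = defaultdict(float)
    for clause in cnf:
        variables = sorted({abs(lit) for lit in clause})
        s = len(variables)
        if s < 2:
            continue
        contribution = 2.0 / (s * (s - 1))
        for u, v in combinations(variables, 2):
            weight[(u, v)] += contribution
    return weight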

Cite as

Yoichiro Iida, Tomohiro Sonobe, and Mary Inaba. Parallel Clause Sharing Strategy Based on Graph Structure of SAT Problem. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 17:1-17:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{iida_et_al:LIPIcs.SAT.2024.17,
  author =	{Iida, Yoichiro and Sonobe, Tomohiro and Inaba, Mary},
  title =	{{Parallel Clause Sharing Strategy Based on Graph Structure of SAT Problem}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{17:1--17:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.17},
  URN =		{urn:nbn:de:0030-drops-205392},
  doi =		{10.4230/LIPIcs.SAT.2024.17},
  annote =	{Keywords: SAT Solver, Structure of SAT, Parallel application, Clause Learning}
}
Document
eSLIM: Circuit Minimization with SAT Based Local Improvement

Authors: Franz-Xaver Reichl, Friedrich Slivovsky, and Stefan Szeider

Published in: LIPIcs, Volume 305, 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)


Abstract
eSLIM is a tool for circuit minimization that utilizes Exact Synthesis and the SAT-based local improvement method (SLIM) to locally improve circuits. eSLIM improves upon the earlier prototype CIOPS that uses Quantified Boolean Formulas (QBF) to succinctly encode resynthesis of multi-output subcircuits subject to don't cares. This paper describes two improvements. First, it presents a purely propositional encoding based on a Boolean relation characterizing the input-output behavior of the subcircuit under don't cares. This allows the use of a SAT solver for resynthesis, substantially reducing running times when applied to functions from the IWLS 2023 competition, where eSLIM placed second. Second, it proposes circuit partitioning techniques in which don't cares for a subcircuit are captured only with respect to an enclosing window, rather than the entire circuit. Circuit partitioning trades completeness for efficiency, and successfully enables the application of exact synthesis to some of the largest circuits in the EPFL suite, leading to improvements over the current best implementation for several instances.

Cite as

Franz-Xaver Reichl, Friedrich Slivovsky, and Stefan Szeider. eSLIM: Circuit Minimization with SAT Based Local Improvement. In 27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 305, pp. 23:1-23:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{reichl_et_al:LIPIcs.SAT.2024.23,
  author =	{Reichl, Franz-Xaver and Slivovsky, Friedrich and Szeider, Stefan},
  title =	{{eSLIM: Circuit Minimization with SAT Based Local Improvement}},
  booktitle =	{27th International Conference on Theory and Applications of Satisfiability Testing (SAT 2024)},
  pages =	{23:1--23:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-334-8},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{305},
  editor =	{Chakraborty, Supratik and Jiang, Jie-Hong Roland},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.23},
  URN =		{urn:nbn:de:0030-drops-205458},
  doi =		{10.4230/LIPIcs.SAT.2024.23},
  annote =	{Keywords: QBF, Exact Synthesis, Circuit Minimization, SLIM}
}
Document
Local Enumeration and Majority Lower Bounds

Authors: Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard

Published in: LIPIcs, Volume 300, 39th Computational Complexity Conference (CCC 2024)


Abstract
Depth-3 circuit lower bounds and k-SAT algorithms are intimately related; the state-of-the-art Σ^k_3-circuit lower bound (Or-And-Or circuits with bottom fan-in at most k) and the k-SAT algorithm of Paturi, Pudlák, Saks, and Zane (J. ACM'05) are based on the same combinatorial theorem regarding k-CNFs. In this paper we define a problem that reveals new interactions between the two, and suggests a concrete approach to significantly stronger circuit lower bounds and improved k-SAT algorithms. For a natural number k and a parameter t, we consider the Enum(k, t) problem defined as follows: given an n-variable k-CNF and an initial assignment α, output all satisfying assignments at Hamming distance t(n) from α, assuming that there are no satisfying assignments at Hamming distance less than t(n) from α. We observe that an upper bound b(n, k, t) on the complexity of Enum(k, t) simultaneously implies depth-3 circuit lower bounds and k-SAT algorithms: - Depth-3 circuits: Any Σ^k_3 circuit computing the Majority function has size at least binom(n,n/2)/b(n, k, n/2). - k-SAT: There exists an algorithm solving k-SAT in time O(∑_{t=1}^{n/2}b(n, k, t)). A simple construction shows that b(n, k, n/2) ≥ 2^{(1 - O(log(k)/k))n}. Thus, matching upper bounds for b(n, k, n/2) would imply a Σ^k_3-circuit lower bound of 2^{Ω(log(k)n/k)} and a k-SAT upper bound of 2^{(1 - Ω(log(k)/k))n}. The former yields an unrestricted depth-3 lower bound of 2^{ω(√n)}, solving a long-standing open problem, and the latter breaks the Super Strong Exponential Time Hypothesis. In this paper, we propose a randomized algorithm for Enum(k, t) and introduce new ideas to analyze it. We demonstrate the power of our ideas by considering the first non-trivial instance of the problem, i.e., Enum(3, n/2). We show that the expected running time of our algorithm is 1.598ⁿ, substantially improving on the trivial bound of 3^{n/2} ≃ 1.732ⁿ. This already improves Σ^3_3 lower bounds for the Majority function to 1.251ⁿ. The previous bound was 1.154ⁿ, which follows from the work of Håstad, Jukna, and Pudlák (Comput. Complex.'95). By restricting ourselves to monotone CNFs, Enum(k, t) immediately becomes a hypergraph Turán problem. Therefore our techniques might be of independent interest in extremal combinatorics.
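Enum(k, t) can be solved trivially by checking every assignment at Hamming distance exactly t from α, i.e. in binom(n,t)·poly time; the sketch below is only that trivial baseline (with assumed names), against which the paper's 1.598ⁿ bound for Enum(3, n/2) should be compared.

from itertools import combinations

def enum_kt(cnf, alpha, t):
    # cnf: list of clauses (non-zero integer literals, variables 1..n);
    # alpha: tuple of booleans (the initial assignment); t: target Hamming distance.
    # Returns all satisfying assignments at Hamming distance exactly t from alpha.
    n = len(alpha)
    def satisfies(a):
        return all(any(a[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in cnf)
    result = []
    for flips in combinations(range(n), t):
        a = list(alpha)
        for i in flips:
            a[i] = not a[i]
        if satisfies(a):
            result.append(tuple(a))
    return result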

Cite as

Mohit Gurumukhani, Ramamohan Paturi, Pavel Pudlák, Michael Saks, and Navid Talebanfard. Local Enumeration and Majority Lower Bounds. In 39th Computational Complexity Conference (CCC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 300, pp. 17:1-17:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gurumukhani_et_al:LIPIcs.CCC.2024.17,
  author =	{Gurumukhani, Mohit and Paturi, Ramamohan and Pudl\'{a}k, Pavel and Saks, Michael and Talebanfard, Navid},
  title =	{{Local Enumeration and Majority Lower Bounds}},
  booktitle =	{39th Computational Complexity Conference (CCC 2024)},
  pages =	{17:1--17:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-331-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{300},
  editor =	{Santhanam, Rahul},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2024.17},
  URN =		{urn:nbn:de:0030-drops-204136},
  doi =		{10.4230/LIPIcs.CCC.2024.17},
  annote =	{Keywords: Depth 3 circuits, k-CNF satisfiability, Circuit lower bounds, Majority function}
}
Document
Track A: Algorithms, Complexity and Games
Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games

Authors: Andrei Constantinescu, Pascal Lenzner, Rebecca Reiffenhäuser, Daniel Schmand, and Giovanna Varricchio

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
A decade ago, Gerhard Woeginger posed an open problem that became well-known as "Woeginger’s Hiking Problem": Consider a group of n people that want to go hiking; everyone expresses preferences over the size of their hiking group in the form of an interval between 1 and n. Is it possible to efficiently assign the n people to a set of hiking subgroups so that every person approves the size of their assigned subgroup? The problem is also known as efficiently deciding if an instance of an anonymous Hedonic Game with interval approval preferences admits a wonderful partition. We resolve the open problem in the affirmative by presenting an O(n⁵) time algorithm for Woeginger’s Hiking Problem. Our solution is based on employing a dynamic programming approach for a specific rectangle stabbing problem from computational geometry. Moreover, we propose natural, more demanding extensions of the problem, e.g., maximizing the number of satisfied participants and variants with single-peaked preferences, and show that they are also efficiently solvable. Last but not least, we employ our solution to efficiently compute a partition that maximizes the egalitarian welfare for anonymous single-peaked Hedonic Games.
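To make the "wonderful partition" notion concrete, here is a checker (not the O(n⁵) algorithm from the paper) that verifies every person approves the size of their assigned subgroup; names and the example are illustrative.

def is_wonderful(partition, intervals):
    # partition: list of groups, each a list of person indices;
    # intervals[i] = (lo_i, hi_i), the group sizes person i approves.
    return all(intervals[p][0] <= len(group) <= intervals[p][1]
               for group in partition for p in group)

# Example: persons 0 and 1 are happy in a pair, person 2 insists on hiking alone.
print(is_wonderful([[0, 1], [2]], [(2, 2), (2, 3), (1, 1)]))  # True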

Cite as

Andrei Constantinescu, Pascal Lenzner, Rebecca Reiffenhäuser, Daniel Schmand, and Giovanna Varricchio. Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 48:1-48:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{constantinescu_et_al:LIPIcs.ICALP.2024.48,
  author =	{Constantinescu, Andrei and Lenzner, Pascal and Reiffenh\"{a}user, Rebecca and Schmand, Daniel and Varricchio, Giovanna},
  title =	{{Solving Woeginger’s Hiking Problem: Wonderful Partitions in Anonymous Hedonic Games}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{48:1--48:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.48},
  URN =		{urn:nbn:de:0030-drops-201910},
  doi =		{10.4230/LIPIcs.ICALP.2024.48},
  annote =	{Keywords: Algorithmic Game Theory, Dynamic Programming, Anonymous Hedonic Games, Single-Peaked Preferences, Social Optimum, Wonderful Partitions}
}
Document
Track A: Algorithms, Complexity and Games
Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems

Authors: Serge Gaspers and Jerry Zirui Li

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
Let U be a universe on n elements, let k be a positive integer, and let ℱ be a family of (implicitly defined) subsets of U. We consider the problems of partitioning U into k sets from ℱ, covering U with k sets from ℱ, and packing k non-intersecting sets from ℱ into U. Classically, these problems can be solved via inclusion-exclusion in 2ⁿ n^O(1) time [Andreas Björklund et al., 2009]. Quantumly, there are faster algorithms for graph coloring with running time O(1.9140ⁿ) [Kazuya Shimizu and Ryuhei Mori, 2022] and for Set Cover with a small number of sets with running time O(1.7274ⁿ |ℱ|^O(1)) [Andris Ambainis et al., 2019]. In this paper, we give a quantum speedup for Set Partition, Set Cover, and Set Packing whenever there is a classical enumeration algorithm that lends itself to a quadratic quantum speedup, which, for any subinstance on a set X ⊆ U, enumerates at least one member of a k-partition, k-cover, or k-packing (if one exists) restricted to (or projected onto, in the case of k-cover) the set X in c^|X| n^O(1) time with c < 2. Our bounded-error quantum algorithm runs in time (2+c)^{n/2} n^O(1) for Set Partition, Set Cover, and Set Packing. It is obtained by combining three algorithms that have the best running time for some values of c. When c ≤ 1.147899, our algorithm is slightly faster than (2+c)^{n/2} n^O(1); when c approaches 1, it matches the O(1.7274ⁿ |ℱ|^O(1)) running time of [Andris Ambainis et al., 2019] for Set Cover when |ℱ| is subexponential in n. For covering, packing, and partitioning into maximal independent sets, maximal cliques, maximal bicliques, maximal cluster graphs, maximal triangle-free graphs, maximal cographs, maximal claw-free graphs, maximal trivially-perfect graphs, maximal threshold graphs, maximal split graphs, maximal line graphs, and maximal induced forests, we obtain bounded-error quantum algorithms with running times ranging from O(1.8554ⁿ) to O(1.9629ⁿ). Packing and covering by maximal induced matchings can be done quantumly in O(1.8934ⁿ) time. For Graph Coloring (covering with k maximal independent sets), we further improve the running time to O(1.7956ⁿ) by leveraging faster algorithms for coloring with a small number of colors to better balance our divide-and-conquer steps. For Domatic Number (packing k minimal dominating sets), we obtain a O((2-ε)ⁿ) running time for some ε > 0. Several of our results should be of interest to proponents of classical computing: - We present an inclusion-exclusion algorithm with running time O^*(∑_{i=0}^⌊αn⌋ binom(n,i)), which determines, for each X ⊆ U of size at most α n, 0 ≤ α ≤ 1, whether (X,ℱ) has a k-cover, k-partition, or k-packing. This running time is best-possible, up to polynomial factors. - We prove that for any linear-sized vertex subset X ⊆ V of a graph G = (V,E), the number of minimal dominating sets of G that are subsets of X is O((2-ε)^|X|) for some ε > 0.
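The classical 2ⁿ-time inclusion-exclusion baseline referenced above can be sketched for an explicitly given family: there is a k-cover iff the signed sum ∑_{X ⊆ U} (-1)^{|U∖X|} a(X)^k is positive, where a(X) counts the sets of ℱ contained in X. The paper works with implicitly defined families and gives quantum speedups; this sketch only illustrates the classical counting idea, with assumed names.

def has_k_cover(n, family, k):
    # n: universe size (elements 0..n-1); family: list of sets of elements; k: number of sets allowed.
    # Counts k-tuples of family members whose union is the whole universe, via inclusion-exclusion.
    masks = [sum(1 << e for e in S) for S in family]
    full = (1 << n) - 1
    total = 0
    for X in range(1 << n):
        a = sum(1 for m in masks if m & ~X == 0)          # sets of the family contained in X
        missing = bin(full & ~X).count("1")               # |U \ X|
        total += (-1) ** missing * a ** k
    return total > 0

print(has_k_cover(3, [{0, 1}, {1, 2}, {0}], 2))  # True: {0,1} and {1,2} cover {0,1,2}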

Cite as

Serge Gaspers and Jerry Zirui Li. Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 69:1-69:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gaspers_et_al:LIPIcs.ICALP.2024.69,
  author =	{Gaspers, Serge and Li, Jerry Zirui},
  title =	{{Quantum Algorithms for Graph Coloring and Other Partitioning, Covering, and Packing Problems}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{69:1--69:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.69},
  URN =		{urn:nbn:de:0030-drops-202124},
  doi =		{10.4230/LIPIcs.ICALP.2024.69},
  annote =	{Keywords: Graph algorithms, quantum algorithms, graph coloring, domatic number, set cover, set partition, set packing}
}
