275 Search Results for "Függer, Matthias"


Document
Online Simple Knapsack with Bounded Predictions

Authors: Matthias Gehnen, Henri Lotze, and Peter Rossmanith

Published in: LIPIcs, Volume 289, 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)


Abstract
In the Online Simple Knapsack problem, an algorithm has to pack a knapsack of unit size as full as possible with items that arrive sequentially. The algorithm has no prior knowledge of the length or nature of the instance. Its performance is measured against the best possible packing of all items of the same instance, over all possible instances. In the classical model of online computation, it is well known that there is no constant bound on the ratio between the size of an optimal packing and the size of an online algorithm's packing. A recent variation of the classical online model is that of predictions. In this model, an algorithm is given knowledge about the instance in advance, which is in reality distorted by some factor δ that is commonly unknown to the algorithm. The algorithm only learns the actual nature of an input element once it is revealed and an irrevocable, immediate decision has to be made. In this work, we study a slight variation of this model in which the error term, and thus the range of sizes in which an announced item may actually lie, is given to the algorithm in advance. It thus knows the range of sizes from which the actual size of each item is selected. We find that the analysis of the Online Simple Knapsack problem under this model is surprisingly involved. For values of 0 < δ ≤ 1/7, we prove a tight competitive ratio of 2. From there on, we are able to prove that there are at least three alternating functions that describe the competitive ratio. We provide partially tight bounds for the whole range of 0 < δ < 1, showing in particular that the competitive ratio as a function of δ is not continuous.
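As a concrete illustration of the model, the following Python sketch simulates the online loop with a naive take-it-if-it-fits rule; the multiplicative reading of the error band and all names are assumptions for illustration, not the paper's algorithm.

def online_knapsack_with_predictions(predictions, actuals, delta):
    # Toy online loop for the bounded-prediction model: the algorithm
    # sees all predicted sizes and the error bound delta up front, but
    # learns an item's actual size only at the moment it must make an
    # irrevocable accept/reject decision. The packing rule below is a
    # naive placeholder, not the strategy analysed in the paper.
    load = 0.0
    for predicted, actual in zip(predictions, actuals):
        # Model sanity check (one possible reading of the delta-band):
        assert (1 - delta) * predicted <= actual <= (1 + delta) * predicted
        if load + actual <= 1.0:  # take whatever still fits
            load += actual
    return load

# Example instance with delta = 0.1.
print(online_knapsack_with_predictions([0.5, 0.4, 0.3], [0.52, 0.38, 0.31], 0.1))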

Cite as

Matthias Gehnen, Henri Lotze, and Peter Rossmanith. Online Simple Knapsack with Bounded Predictions. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 289, pp. 37:1-37:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gehnen_et_al:LIPIcs.STACS.2024.37,
  author =	{Gehnen, Matthias and Lotze, Henri and Rossmanith, Peter},
  title =	{{Online Simple Knapsack with Bounded Predictions}},
  booktitle =	{41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)},
  pages =	{37:1--37:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-311-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{289},
  editor =	{Beyersdorff, Olaf and Kant\'{e}, Mamadou Moustapha and Kupferman, Orna and Lokshtanov, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2024.37},
  URN =		{urn:nbn:de:0030-drops-197476},
  doi =		{10.4230/LIPIcs.STACS.2024.37},
  annote =	{Keywords: Online problem, Simple Knapsack, Predictions, Machine-Learned Advice}
}
Document
FPT Approximation of Generalised Hypertree Width for Bounded Intersection Hypergraphs

Authors: Matthias Lanzinger and Igor Razgon

Published in: LIPIcs, Volume 289, 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)


Abstract
Generalised hypertree width (ghw) is a hypergraph parameter that is central to the tractability of many prominent problems with natural hypergraph structure. Computing the ghw of a hypergraph is notoriously hard. The decision version of the problem, checking whether ghw(H) ≤ k, is paraNP-hard when parameterised by k. Furthermore, approximation of ghw is at least as hard as approximation of Set-Cover, which is known to not admit any FPT approximation algorithms. Research in the computation of ghw so far has focused on identifying structural restrictions to hypergraphs, such as bounds on the size of edge intersections, that permit XP algorithms for ghw. Yet, even under these restrictions the problem has so far evaded any kind of FPT algorithm. In this paper we take the first step towards FPT algorithms for ghw by showing that the parameter can be approximated in FPT time for graphs of bounded edge intersection size. In concrete terms, we show that there exists an FPT algorithm, parameterised by k and d, that for an input hypergraph H with maximum cardinality of edge intersections d and an integer k either outputs a tree decomposition with ghw(H) ≤ 4k(k+d+1)(2k-1), or rejects, in which case it is guaranteed that ghw(H) > k. Thus, in the special case of hypergraphs of bounded edge intersection, we obtain an FPT O(k³)-approximation algorithm for ghw.

Cite as

Matthias Lanzinger and Igor Razgon. FPT Approximation of Generalised Hypertree Width for Bounded Intersection Hypergraphs. In 41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 289, pp. 48:1-48:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lanzinger_et_al:LIPIcs.STACS.2024.48,
  author =	{Lanzinger, Matthias and Razgon, Igor},
  title =	{{FPT Approximation of Generalised Hypertree Width for Bounded Intersection Hypergraphs}},
  booktitle =	{41st International Symposium on Theoretical Aspects of Computer Science (STACS 2024)},
  pages =	{48:1--48:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-311-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{289},
  editor =	{Beyersdorff, Olaf and Kant\'{e}, Mamadou Moustapha and Kupferman, Orna and Lokshtanov, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2024.48},
  URN =		{urn:nbn:de:0030-drops-197588},
  doi =		{10.4230/LIPIcs.STACS.2024.48},
  annote =	{Keywords: generalized hypertree width, hypergraphs, parameterized algorithms, approximation algorithms}
}
Document
Extended Abstract
Discreteness of Asymptotic Tensor Ranks (Extended Abstract)

Authors: Jop Briët, Matthias Christandl, Itai Leigh, Amir Shpilka, and Jeroen Zuiddam

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Tensor parameters that are amortized or regularized over large tensor powers, often called "asymptotic" tensor parameters, play a central role in several areas including algebraic complexity theory (constructing fast matrix multiplication algorithms), quantum information (entanglement cost and distillable entanglement), and additive combinatorics (bounds on cap sets, sunflower-free sets, etc.). Examples are the asymptotic tensor rank, asymptotic slice rank and asymptotic subrank. Recent works (Costa-Dalai, Blatter-Draisma-Rupniewski, Christandl-Gesmundo-Zuiddam) have investigated notions of discreteness (no accumulation points) or "gaps" in the values of such tensor parameters. We prove a general discreteness theorem for asymptotic tensor parameters of order-three tensors and use this to prove that (1) over any finite field (and in fact any finite set of coefficients in any field), the asymptotic subrank and the asymptotic slice rank have no accumulation points, and (2) over the complex numbers, the asymptotic slice rank has no accumulation points. Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized. The first lower bound says that the asymptotic subrank of any concise three-tensor is at least the cube-root of the smallest dimension. The second lower bound says that any concise three-tensor that is "narrow enough" (has one dimension much smaller than the other two) has maximal asymptotic subrank. Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions. We prove that for any concise tensor, the product of any two such maximum ranks must be large, and as a consequence there are always two distinct directions with large max-rank.
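The directional max-rank quantity in the last sentences can be made concrete with a few lines of NumPy: the matrix subspace obtained by slicing in one direction is the span of the slices, and over the reals a random linear combination attains the maximum rank in that span with probability 1. The following is only an illustrative sketch, not a computation from the paper.

import numpy as np

def max_rank_in_slice_span(T, axis, trials=50, seed=0):
    # Estimate the maximum matrix rank in the span of the slices of the
    # 3-tensor T along the given axis by sampling random combinations.
    rng = np.random.default_rng(seed)
    slices = np.moveaxis(T, axis, 0)          # stack of matrices
    best = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(slices.shape[0])
        combo = np.tensordot(coeffs, slices, axes=1)
        best = max(best, np.linalg.matrix_rank(combo))
    return best

# A random 3x4x5 tensor is concise with probability 1; the abstract's
# result says the product of any two directional max-ranks is large.
T = np.random.default_rng(1).standard_normal((3, 4, 5))
print([max_rank_in_slice_span(T, axis) for axis in range(3)])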

Cite as

Jop Briët, Matthias Christandl, Itai Leigh, Amir Shpilka, and Jeroen Zuiddam. Discreteness of Asymptotic Tensor Ranks (Extended Abstract). In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 20:1-20:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{briet_et_al:LIPIcs.ITCS.2024.20,
  author =	{Bri\"{e}t, Jop and Christandl, Matthias and Leigh, Itai and Shpilka, Amir and Zuiddam, Jeroen},
  title =	{{Discreteness of Asymptotic Tensor Ranks}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{20:1--20:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.20},
  URN =		{urn:nbn:de:0030-drops-195483},
  doi =		{10.4230/LIPIcs.ITCS.2024.20},
  annote =	{Keywords: Tensors, Asymptotic rank, Subrank, Slice rank, Restriction, Degeneration, Diagonalization, SLOCC}
}
Document
Classical Verification of Quantum Learning

Authors: Matthias C. Caro, Marcel Hinsche, Marios Ioannou, Alexander Nietner, and Ryan Sweke

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Quantum data access and quantum processing can make certain classically intractable learning tasks feasible. However, quantum capabilities will only be available to a select few in the near future. Thus, reliable schemes that allow classical clients to delegate learning to untrusted quantum servers are required to facilitate widespread access to quantum learning advantages. Building on a recently introduced framework of interactive proof systems for classical machine learning, we develop a framework for classical verification of quantum learning. We exhibit learning problems that a classical learner cannot efficiently solve on their own, but that they can efficiently and reliably solve when interacting with an untrusted quantum prover. Concretely, we consider the problems of agnostic learning parities and Fourier-sparse functions with respect to distributions with uniform input marginal. We propose a new quantum data access model that we call "mixture-of-superpositions" quantum examples, based on which we give efficient quantum learning algorithms for these tasks. Moreover, we prove that agnostic quantum parity and Fourier-sparse learning can be efficiently verified by a classical verifier with only random example or statistical query access. Finally, we showcase two general scenarios in learning and verification in which quantum mixture-of-superpositions examples do not lead to sample complexity improvements over classical data. Our results demonstrate that the potential power of quantum data for learning tasks, while not unlimited, can be utilized by classical agents through interaction with untrusted quantum entities.

Cite as

Matthias C. Caro, Marcel Hinsche, Marios Ioannou, Alexander Nietner, and Ryan Sweke. Classical Verification of Quantum Learning. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 24:1-24:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{caro_et_al:LIPIcs.ITCS.2024.24,
  author =	{Caro, Matthias C. and Hinsche, Marcel and Ioannou, Marios and Nietner, Alexander and Sweke, Ryan},
  title =	{{Classical Verification of Quantum Learning}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{24:1--24:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.24},
  URN =		{urn:nbn:de:0030-drops-195524},
  doi =		{10.4230/LIPIcs.ITCS.2024.24},
  annote =	{Keywords: computational learning theory, quantum learning theory, interactive proofs, quantum oracles, agnostic learning}
}
Document
Quantum Event Learning and Gentle Random Measurements

Authors: Adam Bene Watts and John Bostanci

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
We prove that the expected disturbance caused to a quantum system by a sequence of randomly ordered two-outcome projective measurements is upper bounded by the square root of the probability that at least one measurement in the sequence accepts. We call this bound the Gentle Random Measurement Lemma. We then extend the techniques used to prove this lemma to develop protocols for problems in which we are given sample access to an unknown state ρ and asked to estimate properties of the accepting probabilities Tr[M_i ρ] of a set of measurements {M₁, M₂, … , M_m}. We call these types of problems Quantum Event Learning Problems. In particular, we show that randomly ordering projective measurements solves the Quantum OR problem, answering an open question of Aaronson. We also give a Quantum OR protocol which works on non-projective measurements and which outperforms both the random measurement protocol analyzed in this paper and the protocol of Harrow, Lin, and Montanaro. However, this protocol requires a more complicated type of measurement, which we call a Blended Measurement. Given additional guarantees on the set of measurements {M₁, …, M_m}, we show the random and blended measurement Quantum OR protocols developed in this paper can also be used to find a measurement M_i such that Tr[M_i ρ] is large. We call the problem of finding such a measurement Quantum Event Finding. We also show that Blended Measurements give a sample-efficient protocol for Quantum Mean Estimation: a problem in which the goal is to estimate the average accepting probability of a set of measurements on an unknown state. Finally, we consider the Threshold Search Problem described by O'Donnell and Bădescu where, given a set of measurements {M₁, …, M_m} along with sample access to an unknown state ρ satisfying Tr[M_i ρ] ≥ 1/2 for some M_i, the goal is to find a measurement M_j such that Tr[M_j ρ] ≥ 1/2 - ε. By building on our Quantum Event Finding result we show that randomly ordered (or blended) measurements can be used to solve this problem using O(log²(m) / ε²) copies of ρ. This matches the performance of the algorithm given by O'Donnell and Bădescu, but does not require injected noise in the measurements. Consequently, we obtain an algorithm for Shadow Tomography which matches the current best known sample complexity (i.e. requires Õ(log²(m)log(d)/ε⁴) samples). This algorithm does not require injected noise in the quantum measurements, but does require measurements to be made in a random order, and so is no longer online.
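To make the setting tangible, here is a small NumPy simulation of one randomly ordered sequence of two-outcome projective measurements: it tracks the post-rejection state, the probability that at least one measurement accepts, and the resulting disturbance. The instance is invented, and a single sampled ordering only illustrates the lemma, which bounds the disturbance in expectation over orderings.

import numpy as np

def trace_distance(rho, sigma):
    # Half the sum of the absolute eigenvalues of the difference.
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def sequential_measure(rho, projectors, rng):
    # Apply the two-outcome measurements {P, I-P} in a random order,
    # conditioning on every measurement rejecting; return the overall
    # accept probability and the normalized post-rejection state.
    identity = np.eye(rho.shape[0])
    state, p_all_reject = rho.copy(), 1.0
    for i in rng.permutation(len(projectors)):
        Q = identity - projectors[i]          # "reject" outcome
        p_reject = np.trace(Q @ state @ Q).real
        p_all_reject *= p_reject
        state = Q @ state @ Q / p_reject      # post-measurement update
    return 1.0 - p_all_reject, state

rng = np.random.default_rng(0)
d = 4
rho = np.eye(d) / d                           # maximally mixed state
projectors = []
for _ in range(3):                            # random rank-1 projectors
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    projectors.append(np.outer(v, v))
p_accept, post = sequential_measure(rho, projectors, rng)
# The lemma: E[disturbance] <= sqrt(P[at least one measurement accepts]).
print(trace_distance(rho, post), np.sqrt(p_accept))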

Cite as

Adam Bene Watts and John Bostanci. Quantum Event Learning and Gentle Random Measurements. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 97:1-97:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{watts_et_al:LIPIcs.ITCS.2024.97,
  author =	{Watts, Adam Bene and Bostanci, John},
  title =	{{Quantum Event Learning and Gentle Random Measurements}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{97:1--97:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.97},
  URN =		{urn:nbn:de:0030-drops-196254},
  doi =		{10.4230/LIPIcs.ITCS.2024.97},
  annote =	{Keywords: Event learning, gentle measurements, random measurements, quantum OR, threshold search, shadow tomography}
}
Document
Cluster Editing with Overlapping Communities

Authors: Emmanuel Arrighi, Matthias Bentert, Pål Grønås Drange, Blair D. Sullivan, and Petra Wolf

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
Cluster Editing, also known as correlation clustering, is a well-studied graph modification problem. In this problem, one is given a graph and allowed to perform up to k edge additions and deletions to transform it into a cluster graph, i.e., a graph consisting of a disjoint union of cliques. However, in real-world networks, clusters are often overlapping. For example, in social networks, a person might belong to several communities - e.g. those corresponding to work, school, or neighborhood. Another strong motivation comes from language networks where trying to cluster words with similar usage can be confounded by homonyms, that is, words with multiple meanings like "bat". The recently introduced operation of vertex splitting is one natural approach to incorporating such overlap into Cluster Editing. First used in the context of graph drawing, this operation allows a vertex v to be replaced by two vertices whose combined neighborhood is the neighborhood of v (and thus v can belong to more than one cluster). The problem of transforming a graph into a cluster graph using at most k edge additions, edge deletions, or vertex splits is called Cluster Editing with Vertex Splitting and is known to admit a polynomial kernel with respect to k and an O(9^{k²} + n + m)-time (parameterized) algorithm. However, it was not known whether the problem is NP-hard, a question which was originally asked by Abu-Khzam et al. [Combinatorial Optimization, 2018]. We answer this in the affirmative. We further give an improved algorithm running in O(2^{7k log k} + n + m) time.
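The vertex-splitting operation is easy to state in code. Below is a small Python sketch on a dictionary-of-sets adjacency structure; the naming scheme for the two copies is an arbitrary illustrative choice, not from the paper.

def split_vertex(adj, v, part1, part2):
    # Replace v by two copies whose combined neighbourhood is N(v), so
    # the original vertex may end up in two different clusters.
    assert part1 | part2 == adj[v], "the parts must cover N(v)"
    copies = ((v, 1), (v, 2))
    for copy, part in zip(copies, (part1, part2)):
        adj[copy] = set(part)
        for u in part:
            adj[u].add(copy)
    for u in adj.pop(v):
        adj[u].discard(v)
    return copies

# Two triangles sharing vertex 0: a single split of 0 turns the graph
# into a disjoint union of two cliques, i.e. a cluster graph.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
split_vertex(adj, 0, {1, 2}, {3, 4})
print(adj)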

Cite as

Emmanuel Arrighi, Matthias Bentert, Pål Grønås Drange, Blair D. Sullivan, and Petra Wolf. Cluster Editing with Overlapping Communities. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 2:1-2:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{arrighi_et_al:LIPIcs.IPEC.2023.2,
  author =	{Arrighi, Emmanuel and Bentert, Matthias and Drange, P\r{a}l Gr{\o}n\r{a}s and Sullivan, Blair D. and Wolf, Petra},
  title =	{{Cluster Editing with Overlapping Communities}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{2:1--2:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.2},
  URN =		{urn:nbn:de:0030-drops-194218},
  doi =		{10.4230/LIPIcs.IPEC.2023.2},
  annote =	{Keywords: graph modification, correlation clustering, vertex splitting, NP-hardness, parameterized algorithm}
}
Document
On the Complexity of Finding a Sparse Connected Spanning Subgraph in a Non-Uniform Failure Model

Authors: Matthias Bentert, Jannik Schestag, and Frank Sommer

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
We study a generalization of the classic Spanning Tree problem that allows for a non-uniform failure model. More precisely, edges are either safe or unsafe and we assume that failures only affect unsafe edges. In Unweighted Flexible Graph Connectivity we are given an undirected graph G = (V,E) in which the edge set E is partitioned into a set S of safe edges and a set U of unsafe edges and the task is to find a set T of at most k edges such that T - {u} is connected and spans V for any unsafe edge u ∈ T. Unweighted Flexible Graph Connectivity generalizes both Spanning Tree and Hamiltonian Cycle. We study Unweighted Flexible Graph Connectivity in terms of fixed-parameter tractability (FPT). We show an almost complete dichotomy on which parameters lead to fixed-parameter tractability and which lead to hardness. To this end, we obtain FPT-time algorithms with respect to the vertex deletion distance to cluster graphs and with respect to the treewidth. By exploiting the close relationship to Hamiltonian Cycle, we show that FPT-time algorithms for many smaller parameters are unlikely under standard parameterized complexity assumptions. Regarding problem-specific parameters, we observe that Unweighted Flexible Graph Connectivity admits an FPT-time algorithm when parameterized by the number of unsafe edges. Furthermore, we investigate a below-upper-bound parameter for the number of edges of a solution. We show that this parameter also leads to an FPT-time algorithm.
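A feasibility check for a candidate solution T follows directly from the definition; the Python sketch below (instance and names invented for illustration) tests that T connects and spans V and remains so after the failure of any single unsafe edge.

def is_connected_spanning(n, edges):
    # Depth-first check that the edge set connects all n vertices.
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def feasible(n, T, unsafe):
    # T must connect and span V, and T - {u} must do so as well for
    # every unsafe edge u in T (safe edges are assumed never to fail).
    if not is_connected_spanning(n, T):
        return False
    return all(is_connected_spanning(n, [e for e in T if e != u])
               for u in T if u in unsafe)

# A 4-cycle of unsafe edges survives any single failure, so k = 4
# suffices here even though a spanning tree needs only 3 edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(feasible(4, cycle, unsafe=set(cycle)))      # True
print(feasible(4, cycle[:3], unsafe=set(cycle)))  # False: a path of unsafe edges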

Cite as

Matthias Bentert, Jannik Schestag, and Frank Sommer. On the Complexity of Finding a Sparse Connected Spanning Subgraph in a Non-Uniform Failure Model. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 4:1-4:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bentert_et_al:LIPIcs.IPEC.2023.4,
  author =	{Bentert, Matthias and Schestag, Jannik and Sommer, Frank},
  title =	{{On the Complexity of Finding a Sparse Connected Spanning Subgraph in a Non-Uniform Failure Model}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{4:1--4:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.4},
  URN =		{urn:nbn:de:0030-drops-194232},
  doi =		{10.4230/LIPIcs.IPEC.2023.4},
  annote =	{Keywords: Flexible graph connectivity, NP-hard problem, parameterized complexity, below-guarantee parameterization, treewidth}
}
Document
New Support Size Bounds for Integer Programming, Applied to Makespan Minimization on Uniformly Related Machines

Authors: Sebastian Berndt, Hauke Brinkop, Klaus Jansen, Matthias Mnich, and Tobias Stamm

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
Mixed-integer linear programming (MILP) is at the core of many advanced algorithms for solving fundamental problems in combinatorial optimization. The complexity of solving MILPs directly correlates with their support size, which is the minimum number of non-zero integer variables in an optimal solution. A hallmark result by Eisenbrand and Shmonin (Oper. Res. Lett., 2006) shows that any feasible integer linear program (ILP) has a solution with support size s ≤ 2m⋅log(4mΔ), where m is the number of constraints and Δ is the largest absolute coefficient in any constraint. Our main combinatorial results are improved support size bounds for ILPs. We show that any ILP has a solution with support size s ≤ m⋅(log(3A_max)+√{log(A_max)}), where A_max ≔ ‖A‖₁ denotes the 1-norm of the constraint matrix A. Furthermore, we show support bounds in the linearized form s ≤ 2m⋅log(1.46 A_max). Our upper bounds also hold with A_max replaced by √mΔ, which improves on the previously best constants in the linearized form. Our main algorithmic results are the fastest known approximation schemes for fundamental scheduling problems, which use the improved support bounds as one ingredient. We design an efficient approximation scheme (EPTAS) for makespan minimization on uniformly related machines (Q||C_max). Our EPTAS yields a (1+ε)-approximation for Q||C_max on N jobs in time 2^𝒪(1/ε log³(1/ε) log(log(1/ε))) + 𝒪(N), which improves over the previously fastest algorithm by Jansen, Klein and Verschae (Math. Oper. Res., 2020) with run time 2^𝒪(1/ε log⁴(1/ε)) + N^𝒪(1). Arguably, our approximation scheme is also simpler than all previous EPTASes for Q||C_max, as we reduce the problem to a novel MILP formulation which greatly benefits from the small support.
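For intuition, the sketch below evaluates the two support size bounds numerically, assuming base-2 logarithms (a common convention in this line of work; the abstract does not fix the base) and using A_max = √mΔ, the substitution under which the abstract says the new bounds also hold.

import math

def eisenbrand_shmonin(m, delta):
    # Classic bound: s <= 2m * log(4 m Delta).
    return 2 * m * math.log2(4 * m * delta)

def improved_bound(m, a_max):
    # New bound: s <= m * (log(3 A_max) + sqrt(log A_max)).
    return m * (math.log2(3 * a_max) + math.sqrt(math.log2(a_max)))

for m, delta in [(5, 10), (50, 100), (500, 1000)]:
    a_max = math.sqrt(m) * delta
    print(m, delta,
          round(eisenbrand_shmonin(m, delta)),
          round(improved_bound(m, a_max)))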

Cite as

Sebastian Berndt, Hauke Brinkop, Klaus Jansen, Matthias Mnich, and Tobias Stamm. New Support Size Bounds for Integer Programming, Applied to Makespan Minimization on Uniformly Related Machines. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{berndt_et_al:LIPIcs.ISAAC.2023.13,
  author =	{Berndt, Sebastian and Brinkop, Hauke and Jansen, Klaus and Mnich, Matthias and Stamm, Tobias},
  title =	{{New Support Size Bounds for Integer Programming, Applied to Makespan Minimization on Uniformly Related Machines}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.13},
  URN =		{urn:nbn:de:0030-drops-193155},
  doi =		{10.4230/LIPIcs.ISAAC.2023.13},
  annote =	{Keywords: Integer programming, scheduling algorithms, uniformly related machines, makespan minimization}
}
Document
Approximation Guarantees for Shortest Superstrings: Simpler and Better

Authors: Matthias Englert, Nicolaos Matsakis, and Pavel Veselý

Published in: LIPIcs, Volume 283, 34th International Symposium on Algorithms and Computation (ISAAC 2023)


Abstract
The Shortest Superstring problem is an NP-hard problem in which, given as input a set of strings, we look for a string of minimum length that contains all input strings as substrings. The Greedy Conjecture (Tarhio and Ukkonen, 1988) states that the GREEDY algorithm, which repeatedly merges the two strings of maximum overlap, is 2-approximate. We have recently shown (STOC 2022) that the approximation guarantee of GREEDY is at most (13+√{57})/6 ≈ 3.425. Before that, the best established upper bound was 3.5 by Kaplan and Shafrir (IPL 2005), which improved upon the upper bound of 4 by Blum et al. (STOC 1991). To derive our previous result, we established two incomparable upper bounds on the overlap sum of all cycle-closing edges in an optimal cycle cover and utilized lemmas of Blum et al. We improve the more involved of the two bounds and, at the same time, make its proof more straightforward. This results in an improved approximation guarantee of (√{67}+2)/3 ≈ 3.396 for GREEDY. Additionally, our result implies an algorithm for the Shortest Superstring problem having an approximation guarantee of (√{67}+14)/9 ≈ 2.466, improving slightly upon the previously best guarantee of (√{57}+37)/18 ≈ 2.475 (STOC 2022).
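For reference, the GREEDY algorithm under discussion is short to implement; the following is a straightforward (quadratic, unoptimised) Python sketch that assumes no input string is a substring of another, which is the standard preprocessing assumption.

def overlap(a, b):
    # Length of the longest suffix of a that is a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    # Repeatedly merge the two distinct strings of maximum overlap
    # until a single superstring remains.
    strings = list(strings)
    while len(strings) > 1:
        best_k, best_i, best_j = -1, None, None
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j and overlap(a, b) > best_k:
                    best_k, best_i, best_j = overlap(a, b), i, j
        merged = strings[best_i] + strings[best_j][best_k:]
        strings = [s for t, s in enumerate(strings)
                   if t not in (best_i, best_j)]
        strings.append(merged)
    return strings[0]

print(greedy_superstring(["cde", "abc", "eab"]))  # "cdeabc", length 6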

Cite as

Matthias Englert, Nicolaos Matsakis, and Pavel Veselý. Approximation Guarantees for Shortest Superstrings: Simpler and Better. In 34th International Symposium on Algorithms and Computation (ISAAC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 283, pp. 29:1-29:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{englert_et_al:LIPIcs.ISAAC.2023.29,
  author =	{Englert, Matthias and Matsakis, Nicolaos and Vesel\'{y}, Pavel},
  title =	{{Approximation Guarantees for Shortest Superstrings: Simpler and Better}},
  booktitle =	{34th International Symposium on Algorithms and Computation (ISAAC 2023)},
  pages =	{29:1--29:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-289-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{283},
  editor =	{Iwata, Satoru and Kakimura, Naonori},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2023.29},
  URN =		{urn:nbn:de:0030-drops-193319},
  doi =		{10.4230/LIPIcs.ISAAC.2023.29},
  annote =	{Keywords: Shortest Superstring problem, Approximation Algorithms}
}
Document
Invited Talk
Beyond Optimal Solutions for Real-World Problems (Invited Talk)

Authors: Maria Garcia de la Banda

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Combinatorial optimisation technology has come a long way. We now have mature high-level modelling languages in which to specify a model of the particular problem of interest [Nethercote et al., 2007; Frisch et al., 2008; Van Hentenryck, 1999; Fourer et al., 1990]; robust complete solvers in each major constraint paradigm, including Constraint Programming (CP), MaxSAT [Jessica Davies and Fahiem Bacchus, 2011; Alexey Ignatiev et al., 2019], and Mixed Integer Programming (MIP); effective incomplete search techniques, such as Large Neighbourhood Search [Paul Shaw, 1998], that can easily be combined with complete solvers to speed up the search; and enough general knowledge about modelling techniques to understand the need for our models to incorporate components such as global constraints [Willem-Jan van Hoeve and Irit Katriel, 2006], symmetry constraints [Ian P. Gent et al., 2006], and more. All this has significantly reduced the amount of knowledge required to apply this technology successfully to the many different combinatorial optimisation problems that permeate our society. And yet, not many organisations use such advanced optimisation technology; instead, they often rely on the solutions provided by problem-specific algorithms that are implemented in traditional imperative languages and lack any of the above advances. Further, while advanced optimisation technology is particularly suitable for the kind of complex human-in-the-loop decision-making problems that occur in critical sectors of our society, including health, transport, energy, disaster management, environment and finance, these decisions are often still made by people with little or no technological support. In this extended abstract I argue that to change this state of affairs, our research focus needs to change from improving the technology on its own to improving it so that users can better trust, use, and maintain the optimisation systems that we develop with it. The rest of this extended abstract discusses my personal experiences and opinions on these three points.

Trust. I highlight trust (which focuses on the user's point of view) rather than trustworthiness (which is a characteristic of the software itself) because I think it is the former rather than the latter that is at stake for the adoption of optimisation technology. One of the biggest hurdles I have found for trust in the context of optimisation systems is for the domain experts to (feel like they) understand the underlying model. While many users never will (or will never have to), I believe it is key for domain experts to have a high-level understanding of the constraints in the model, since their (dis)trust will likely spread through the organisation, impacting the adoption of the system. Thanks to the use of high-level modelling languages in CP, our group has achieved this [Matthias Klapperstueck et al., 2023] by documenting the constraints in a language the users know (mathematics) and linking each constraint to the particular part of the model that implements it (via comments). While domain experts do not completely understand the model, the similarity between the format they understand (mathematics) and the model constraints has helped them verify our perception of their problem and improved their trust in the model. However, more needs to be done in this direction via the development of formal techniques. For example, our group is exploring the use of domain-specific languages [Hudak, 1997] as a bridge between domain experts and modellers that helps both trust and maintenance (see later). This [Sameela Suharshani Wijesundara et al., 2023] and other approaches need to be explored. A very significant source of trust for our domain experts (and of trustworthiness for the software) has been the development of two different models, implemented by two different people, for the same problem [Matthias Klapperstueck et al., 2023]. While this can be seen as a prohibitively expensive exercise, it did not take that long once the first model was mature, it is a good way to onboard new optimisation team members, and it has helped us detect not only bugs but also differences in the interpretation of domain expert information. For optimisation problems where it is not possible to verify the optimality (or even correctness) of the solution, we see such redundant modelling as the only solution for now. Interestingly, a significant step forward in obtaining the trust of our domain experts has been the reporting of an optimality gap whenever an optimal solution could not be found due to time constraints. While explaining this concept took time, once understood it has boosted their trust, particularly when tackling problems where the solution is not easily verifiable or when approximated models/data are used (needed for speed, see later). This makes it difficult to work with CP and SAT solvers, as they usually lack tight lower bounds. Finally, trust is often developed through the use of the system, which I discuss below.

Use. Usability is known to be key for the deployment of software systems. By "system" in our context, I refer to the combination of the problem model(s), the associated solver(s) and, importantly, the User Interface (UI) that often integrates them and is fundamental to their success. In addition to the traditional usability characteristics of software systems, I believe an optimisation system requires particular care in the following areas. Interaction: the system must allow users to interact with the UI not only to provide and modify the input data, but also to modify the constraints (at the very least by turning some on/off) as well as to explore and compare solutions, as argued in [David Meignan et al., 2015; Jie Liu et al., 2021]. Incremental compilers and solvers would significantly help in making this easier, as would generic ways for UIs to communicate with them. Conflict resolution: the system must not only detect infeasible instances, but also support users in understanding the data/constraints that cause infeasibility and how to modify the instance to make it feasible. Any interactive optimisation system that has users will likely have conflicts. Thus, it is mandatory for CP to improve its conflict resolution technology which, while existent [João Marques-Silva and Alessandro Previti, 2014; Lauffer and Topcu, 2019; Ilankaikone Senthooran et al., 2023], is not widespread and is often still problem-dependent, overwhelming (in the number of constraints shown to the user) and slow. Without it, users will be "stumped" when (rather than if) infeasibility is reached. Solution diversity: the system should support users in obtaining a diverse set of (close-to-optimal) solutions, where diversity is measured by a user-provided metric modelled somehow. While some solver-independent technology has been developed and implemented for this [Emmanuel Hebrard et al., 2005; Thierry Petit and Andrew C. Trapp, 2015; Linnea Ingmar et al., 2020], it should be easier to use and more widespread. Further, it requires sophisticated solution comparison capabilities and, importantly, for optimal solutions to be found in seconds rather than hours. This brings me to speed, an area where CP solvers are falling behind. Most of our research group's applications now use MIP solvers due to the need for floats (which precludes us from using learning solvers such as Chuffed [Geoffrey Chu, 2013]), but also due to the lack of the effective warm-start processes that are available in MIP solvers. Interestingly, data and model approximations have been shown to achieve orders-of-magnitude speedups with small reductions in optimality [Matthias Klapperstueck et al., 2023]. Developing generic (i.e., problem-independent) accurate approximations would be extremely useful for complex decision systems. Other areas where I think generic CP methods are worth investigating more include dealing with uncertainty and online problems, ensuring solution fairness (even if it is over time), and studying predict + optimise approaches.

Maintain. I know very few papers devoted to the issue of maintenance in optimisation technology. While this may be due to my lack of knowledge, I suspect it is also due to the limited adoption of optimisation technology. While the issues in this area are again common to other software systems, I believe the solutions for CP require special attention. For example, the issue of changes in user requirements (which our research group calls problem drift) seems particularly prevalent in decision-making systems, as such problems can evolve rapidly due to unforeseen circumstances. This can make optimisation systems obsolete faster than expected. Our research group has proposed to tackle problem drift by developing a requirements model, implemented in the above-mentioned domain-specific languages and created by both domain experts and modellers, that, when modified, re-generates parts of the model to support the modifications [Sameela Suharshani Wijesundara et al., 2023]. This and other approaches, such as the creation of reusable model components [Sophia Saller and Jana Koehler, 2022; Toby Walsh, 2003] or instantiatable classes for common problem domains, are worth investigating.

Cite as

Maria Garcia de la Banda. Beyond Optimal Solutions for Real-World Problems (Invited Talk). In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 1:1-1:4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{garciadelabanda:LIPIcs.CP.2023.1,
  author =	{Garcia de la Banda, Maria},
  title =	{{Beyond Optimal Solutions for Real-World Problems}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{1:1--1:4},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.1},
  URN =		{urn:nbn:de:0030-drops-190384},
  doi =		{10.4230/LIPIcs.CP.2023.1},
  annote =	{Keywords: Combinatorial optimisation systems, usability, trust, maintenance}
}
Document
Exploring Hydrogen Supply/Demand Networks: Modeller and Domain Expert Views

Authors: Matthias Klapperstueck, Frits de Nijs, Ilankaikone Senthooran, Jack Lee-Kopij, Maria Garcia de la Banda, and Michael Wybrow

Published in: LIPIcs, Volume 280, 29th International Conference on Principles and Practice of Constraint Programming (CP 2023)


Abstract
Energy companies are considering producing renewable fuels such as hydrogen/ammonia. Setting up a production network means deciding where to build production plants and how to operate them at minimum electricity and transport cost. These decisions are complicated by many factors, including the difficulty of obtaining accurate current data (e.g., electricity prices and transport costs) for potential supply locations, the accuracy of data predictions (e.g., for demand and costs), and the need for some decisions to be made due to external (not modelled) factors. Thus, decision-makers need access to a user-centric decision system that helps them visualise, explore, interact with, and compare the many possible solutions of many different scenarios. This paper describes the system we have built to support our energy partner in making such decisions, and shows the advantages of having a graphical, user-focused interactive tool and of using a high-level constraint modelling language (MiniZinc) to implement the underlying model.

Cite as

Matthias Klapperstueck, Frits de Nijs, Ilankaikone Senthooran, Jack Lee-Kopij, Maria Garcia de la Banda, and Michael Wybrow. Exploring Hydrogen Supply/Demand Networks: Modeller and Domain Expert Views. In 29th International Conference on Principles and Practice of Constraint Programming (CP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 280, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{klapperstueck_et_al:LIPIcs.CP.2023.21,
  author =	{Klapperstueck, Matthias and de Nijs, Frits and Senthooran, Ilankaikone and Lee-Kopij, Jack and Garcia de la Banda, Maria and Wybrow, Michael},
  title =	{{Exploring Hydrogen Supply/Demand Networks: Modeller and Domain Expert Views}},
  booktitle =	{29th International Conference on Principles and Practice of Constraint Programming (CP 2023)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-300-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{280},
  editor =	{Yap, Roland H. C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2023.21},
  URN =		{urn:nbn:de:0030-drops-190584},
  doi =		{10.4230/LIPIcs.CP.2023.21},
  annote =	{Keywords: Facility Location, Hydrogen Supply Chain, Human-Centric Optimisation}
}
Document
Extended Abstract
A Decomposition Framework for Inconsistency Handling in Qualitative Spatial and Temporal Reasoning (Extended Abstract)

Authors: Yakoub Salhi and Michael Sioutis

Published in: LIPIcs, Volume 278, 30th International Symposium on Temporal Representation and Reasoning (TIME 2023)


Abstract
Dealing with inconsistency is a central problem in AI, due to the fact that inconsistency can arise for many reasons in real-world applications, such as context dependency, multi-source information, vagueness, noisy data, etc. Among the approaches that are involved in inconsistency handling, we can mention argumentation, non-monotonic reasoning, and paraconsistency, e.g., see [Philippe Besnard and Anthony Hunter, 2008; Gerhard Brewka et al., 1997; Koji Tanaka et al., 2013]. In the work of [Yakoub Salhi and Michael Sioutis, 2023], we are interested in dealing with inconsistency in the context of Qualitative Spatio-Temporal Reasoning (QSTR) [Ligozat, 2013]. QSTR is an AI framework that aims to mimic natural, human-like representation and reasoning regarding space and time. This framework is applied in a variety of domains, such as qualitative case-based reasoning and learning [Thiago Pedro Donadon Homem et al., 2020] and visual sensemaking [Jakob Suchan et al., 2021]; the interested reader is referred to [Michael Sioutis and Diedrich Wolter, 2021] for a recent survey.

Motivation. In [Yakoub Salhi and Michael Sioutis, 2023], we study the decomposition of an inconsistent constraint network into consistent subnetworks under possible mandatory constraints. To illustrate the interest of such a decomposition, we provide a simple example, described in Figure 1. The QCN depicted in the top part of the figure corresponds to a description of an inconsistent plan. Further, we assume that the constraint Task A {before} Task B is mandatory. To handle inconsistency, this plan can be transformed into a decomposition of two consistent plans, depicted in the bottom part of the figure; this decomposition can be used, e.g., to capture the fact that Task C must be performed twice. More generally, network decomposition can be involved in inconsistency handling in several ways: it can be used to identify potential contexts that explain the presence of inconsistent information; it can also be used to restore consistency through a compromise between the components of a decomposition, e.g., by using belief merging [Jean-François Condotta et al., 2010]; in addition, QCN decomposition can be used as the basis for defining inconsistency measures.

Contributions. We summarize the contributions of [Yakoub Salhi and Michael Sioutis, 2023] as follows. First, we propose a theoretical study of a problem that consists in decomposing an inconsistent QCN into a bounded number of consistent QCNs that may satisfy a specified part of the original QCN; intuitively, the required common part corresponds to the constraints that are considered necessary, if any. To this end, we provide upper bounds for the minimum number of components in a decomposition as well as computational complexity results. Second, we provide two methods for solving our decomposition problem. The first method corresponds to a greedy constraint-based algorithm, a variant of which involves the use of spanning trees; the basic idea of this variant is that any acyclic constraint graph in QSTR is consistent, and such a graph can be used as a starting point for building consistent components. The second method corresponds to a SAT-based encoding; every model of this encoding is used to construct a valid decomposition. Third, we consider two optimization versions of the initial decomposition problem that focus on minimizing the number of components and maximizing the similarity between components, respectively. The similarity between two QCNs is quantified by the number of common non-universal constraints; the interest in maximizing the similarity lies mainly in the fact that it reduces the number of constraints that distinguish each component from the rest. Of course, our previous methods are adapted to tackle these optimization versions, too. Additionally, we introduce two inconsistency measures based on QCN decomposition, which can be seen as counterparts of measures for propositional KBs introduced in [Matthias Thimm, 2016; Meriem Ammoura et al., 2017], and show that they satisfy several desired properties from the literature. Finally, we provide implementations of our methods for computing decompositions and experimentally evaluate them using different metrics.
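The greedy constraint-based idea can be illustrated on a toy calculus where consistency is trivial to decide: networks of point-algebra "before" constraints are consistent exactly when the corresponding directed graph is acyclic. The Python sketch below is a simplified instance of that idea with a mandatory common part, in the spirit of the plan example above; it is not the algorithm from the paper.

def acyclic(edges):
    # Kahn-style topological check that the directed graph is acyclic.
    nodes = {v for e in edges for v in e}
    indegree = {v: 0 for v in nodes}
    out = {v: [] for v in nodes}
    for a, b in edges:
        out[a].append(b)
        indegree[b] += 1
    queue = [v for v in nodes if indegree[v] == 0]
    processed = 0
    while queue:
        v = queue.pop()
        processed += 1
        for w in out[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return processed == len(nodes)

def greedy_decompose(constraints, mandatory=frozenset()):
    # Place each "x before y" constraint into the first component that
    # stays consistent (= acyclic here); every new component is seeded
    # with the mandatory constraints.
    components = []
    for c in constraints:
        for comp in components:
            if acyclic(list(comp | {c})):
                comp.add(c)
                break
        else:
            comp = set(mandatory) | {c}
            if not acyclic(list(comp)):
                raise ValueError("conflicts with the mandatory part")
            components.append(comp)
    return components

# Inconsistent plan A<B, B<C, C<A with A<B mandatory decomposes into
# two consistent components, both containing A<B.
print(greedy_decompose([("A", "B"), ("B", "C"), ("C", "A")],
                       mandatory={("A", "B")}))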

Cite as

Yakoub Salhi and Michael Sioutis. A Decomposition Framework for Inconsistency Handling in Qualitative Spatial and Temporal Reasoning (Extended Abstract). In 30th International Symposium on Temporal Representation and Reasoning (TIME 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 278, pp. 16:1-16:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{salhi_et_al:LIPIcs.TIME.2023.16,
  author =	{Salhi, Yakoub and Sioutis, Michael},
  title =	{{A Decomposition Framework for Inconsistency Handling in Qualitative Spatial and Temporal Reasoning}},
  booktitle =	{30th International Symposium on Temporal Representation and Reasoning (TIME 2023)},
  pages =	{16:1--16:3},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-298-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{278},
  editor =	{Artikis, Alexander and Bruse, Florian and Hunsberger, Luke},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2023.16},
  URN =		{urn:nbn:de:0030-drops-191062},
  doi =		{10.4230/LIPIcs.TIME.2023.16},
  annote =	{Keywords: Spatial and Temporal Reasoning, Qualitative Constraints, Inconsistency Handling, Decomposition, Inconsistency Measures}
}
Document
Short Paper
Calculating Shadows with U-Nets for Urban Environments (Short Paper)

Authors: Dominik Rothschedl, Franz Welscher, Franziska Hübl, Ivan Majic, Daniele Giannandrea, Matthias Wastian, Johannes Scholz, and Niki Popper

Published in: LIPIcs, Volume 277, 12th International Conference on Geographic Information Science (GIScience 2023)


Abstract
Shadow calculation is an important prerequisite for many urban and environmental analyses, such as the assessment of solar energy potential. We propose a neural net approach that can be trained with 3D geographical information to predict the presence and depth of shadows. We adapt a U-Net algorithm traditionally used in biomedical image segmentation and train it on sections of Styria, Austria. Our two-step approach first predicts the binary existence of shadows and then estimates their depth. Our results on the case study of Styria, Austria show that both models predict shadows with over 80% accuracy, which is satisfactory for real-world applications but still leaves room for improvement.
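As a sketch of the two-step design, here is a deliberately tiny U-Net-style encoder-decoder in PyTorch with a single skip connection; the paper's actual architecture, the encoding of the 3D input data, and the training losses are not specified in the abstract, so everything below is an assumption for illustration.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    # A minimal U-Net-shaped network: one downsampling step, one
    # upsampling step, and one skip connection.
    def __init__(self, in_channels=1, out_channels=1):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 1))

    def forward(self, x):
        skip = self.enc(x)
        up = self.up(self.down(skip))
        return self.dec(torch.cat([skip, up], dim=1))

# Two-step pipeline as described in the abstract: one model for binary
# shadow existence, a second for shadow depth on the same input.
existence_net = TinyUNet()   # would be trained with a binary loss
depth_net = TinyUNet()       # would be trained with a regression loss
x = torch.randn(1, 1, 64, 64)
mask = torch.sigmoid(existence_net(x)) > 0.5
depth = depth_net(x) * mask  # report depth only where shadow is predicted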

Cite as

Dominik Rothschedl, Franz Welscher, Franziska Hübl, Ivan Majic, Daniele Giannandrea, Matthias Wastian, Johannes Scholz, and Niki Popper. Calculating Shadows with U-Nets for Urban Environments (Short Paper). In 12th International Conference on Geographic Information Science (GIScience 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 277, pp. 63:1-63:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rothschedl_et_al:LIPIcs.GIScience.2023.63,
  author =	{Rothschedl, Dominik and Welscher, Franz and H\"{u}bl, Franziska and Majic, Ivan and Giannandrea, Daniele and Wastian, Matthias and Scholz, Johannes and Popper, Niki},
  title =	{{Calculating Shadows with U-Nets for Urban Environments}},
  booktitle =	{12th International Conference on Geographic Information Science (GIScience 2023)},
  pages =	{63:1--63:6},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-288-4},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{277},
  editor =	{Beecham, Roger and Long, Jed A. and Smith, Dianna and Zhao, Qunshan and Wise, Sarah},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2023.63},
  URN =		{urn:nbn:de:0030-drops-189581},
  doi =		{10.4230/LIPIcs.GIScience.2023.63},
  annote =	{Keywords: Neural Net, U-Net, Residual Net, Shadow Calculation}
}
Document
Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication

Authors: Matthias Bentert, Klaus Heeger, and Tomohiro Koana

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
We study the computational complexity of several polynomial-time-solvable graph problems parameterized by vertex integrity, a measure of a graph's vulnerability to vertex removal in terms of connectivity. Vertex integrity is the smallest number ι such that there is a set S of ι' ≤ ι vertices such that every connected component of G-S contains at most ι-ι' vertices. It is known that vertex integrity lies between the well-studied parameters vertex cover number and tree-depth. Our work follows similar studies for vertex cover number [Alon and Yuster, ESA 2007] and tree-depth [Iwata, Ogasawara, and Ohsaka, STACS 2018]. Alon and Yuster designed algorithms for graphs with small vertex cover number using fast matrix multiplication. We demonstrate that fast matrix multiplication can also be effectively used when parameterizing by vertex integrity ι by developing efficient algorithms for problems including an O(ι^{ω-1}n)-time algorithm for Maximum Matching and an O(ι^{(ω-1)/2}n²) ⊆ O(ι^{0.687}n²)-time algorithm for All-Pairs Shortest Paths. These algorithms can be faster than previous algorithms parameterized by tree-depth, for which fast matrix multiplication is not known to be effective.
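For intuition, vertex integrity can be brute-forced on tiny graphs directly from the definition; the following Python sketch does exactly that (exponential time, for illustration only).

from itertools import combinations

def vertex_integrity(n, edges):
    # Smallest iota such that some set S of iota' <= iota vertices
    # leaves connected components of at most iota - iota' vertices.
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def max_component(removed):
        seen, largest = set(removed), 0
        for start in range(n):
            if start in seen:
                continue
            stack, size = [start], 0
            seen.add(start)
            while stack:
                v = stack.pop()
                size += 1
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            largest = max(largest, size)
        return largest

    for iota in range(1, n + 1):
        for k in range(iota + 1):
            if any(max_component(S) <= iota - k
                   for S in combinations(range(n), k)):
                return iota
    return n

# Path on 9 vertices: removing the middle vertex leaves two paths of
# 4 vertices each, so the vertex integrity is 1 + 4 = 5.
print(vertex_integrity(9, [(i, i + 1) for i in range(8)]))  # 5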

Cite as

Matthias Bentert, Klaus Heeger, and Tomohiro Koana. Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 16:1-16:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bentert_et_al:LIPIcs.ESA.2023.16,
  author =	{Bentert, Matthias and Heeger, Klaus and Koana, Tomohiro},
  title =	{{Fully Polynomial-Time Algorithms Parameterized by Vertex Integrity Using Fast Matrix Multiplication}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{16:1--16:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.16},
  URN =		{urn:nbn:de:0030-drops-186692},
  doi =		{10.4230/LIPIcs.ESA.2023.16},
  annote =	{Keywords: FPT in P, Algebraic Algorithms, Adaptive Algorithms, Subgraph Detection, Matching, APSP}
}
Document
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

Authors: Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms which achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels,
- Independent Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using 𝒪(dk² log n) space;
- Max Cut can be solved in time n^𝒪(dk) using 𝒪(dk log n) space; and
- Dominating Set can be solved in time 2^𝒪(dk) ⋅ n^𝒪(1) using n^𝒪(1) space via a randomized algorithm.
We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.

Cite as

Benjamin Bergougnoux, Vera Chekan, Robert Ganian, Mamadou Moustapha Kanté, Matthias Mnich, Sang-il Oum, Michał Pilipczuk, and Erik Jan van Leeuwen. Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bergougnoux_et_al:LIPIcs.ESA.2023.18,
  author =	{Bergougnoux, Benjamin and Chekan, Vera and Ganian, Robert and Kant\'{e}, Mamadou Moustapha and Mnich, Matthias and Oum, Sang-il and Pilipczuk, Micha{\l} and van Leeuwen, Erik Jan},
  title =	{{Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.18},
  URN =		{urn:nbn:de:0030-drops-186710},
  doi =		{10.4230/LIPIcs.ESA.2023.18},
  annote =	{Keywords: Parameterized complexity, shrubdepth, space complexity, algebraic methods}
}
