15 Search Results for "Ben-Amram, Amir M."


Document
Faster Treewidth-Based Approximations for Wiener Index

Authors: Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Pavel Hudec, Pingjiang Li, and Harshit Jitendra Motwani

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
The Wiener index of a graph G is the sum of distances between all pairs of its vertices. It is a widely-used graph property in chemistry, initially introduced to examine the link between boiling points and structural properties of alkanes, which later found notable applications in drug design. Thus, computing or approximating the Wiener index of molecular graphs, i.e. graphs in which every vertex models an atom of a molecule and every edge models a bond, is of significant interest to the computational chemistry community. In this work, we build upon the observation that molecular graphs are sparse and tree-like and focus on developing efficient algorithms parameterized by treewidth to approximate the Wiener index. We present a new randomized approximation algorithm using a combination of tree decompositions and centroid decompositions. Our algorithm approximates the Wiener index within any desired multiplicative factor (1 ± ε) in time O(n ⋅ log n ⋅ k³ + √n ⋅ k/ε²), where n is the number of vertices of the graph and k is the treewidth. This time bound is almost-linear in n. Finally, we provide experimental results over standard benchmark molecules from PubChem and the Protein Data Bank, showcasing the applicability and scalability of our approach on real-world chemical graphs and comparing it with previous methods.
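As a point of reference for the definition above, the Wiener index of a small unweighted graph can be computed exactly by breadth-first search from every vertex in O(n·m) time. The following minimal Python sketch (illustrative only; the function name and toy graph are mine, and this is not the paper's treewidth-based approximation) does exactly that.

from collections import deque

def wiener_index(adj):
    """adj: dict mapping each vertex of a connected, undirected graph to its neighbour list."""
    total = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:                       # plain BFS from `source`
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())        # sum of distances from `source`
    return total // 2                      # every unordered pair was counted twice

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path on 4 vertices
print(wiener_index(path4))                        # -> 10 = 3*1 + 2*2 + 1*3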

Cite as

Giovanna Kobus Conrado, Amir Kafshdar Goharshady, Pavel Hudec, Pingjiang Li, and Harshit Jitendra Motwani. Faster Treewidth-Based Approximations for Wiener Index. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 6:1-6:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{conrado_et_al:LIPIcs.SEA.2024.6,
  author =	{Conrado, Giovanna Kobus and Goharshady, Amir Kafshdar and Hudec, Pavel and Li, Pingjiang and Motwani, Harshit Jitendra},
  title =	{{Faster Treewidth-Based Approximations for Wiener Index}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{6:1--6:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.6},
  URN =		{urn:nbn:de:0030-drops-203718},
  doi =		{10.4230/LIPIcs.SEA.2024.6},
  annote =	{Keywords: Computational Chemistry, Treewidth, Wiener Index}
}
Document
Track A: Algorithms, Complexity and Games
NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials

Authors: Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
An s-sparse polynomial has at most s monomials with nonzero coefficients. The Equivalence Testing problem for sparse polynomials (ETsparse) asks to decide if a given polynomial f is equivalent to (i.e., in the orbit of) some s-sparse polynomial. In other words, given f ∈ 𝔽[𝐱] and s ∈ ℕ, ETsparse asks to check if there exist A ∈ GL(|𝐱|, 𝔽) and 𝐛 ∈ 𝔽^|𝐱| such that f(A𝐱 + 𝐛) is s-sparse. We show that ETsparse is NP-hard over any field 𝔽, if f is given in the sparse representation, i.e., as a list of nonzero coefficients and exponent vectors. This answers a question posed by Gupta, Saha and Thankey (SODA 2023) and also, more explicitly, by Baraskar, Dewan and Saha (STACS 2024). The result implies that the Minimum Circuit Size Problem (MCSP) is NP-hard for a dense subclass of depth-3 arithmetic circuits if the input is given in sparse representation. We also show that approximating the smallest s₀ such that a given s-sparse polynomial f is in the orbit of some s₀-sparse polynomial to within a factor of s^{1/3 - ε} is NP-hard for any ε > 0; observe that s-factor approximation is trivial as the input is s-sparse. Finally, we show that for any constant σ ≥ 6, checking if a polynomial (given in sparse representation) is in the orbit of some support-σ polynomial is NP-hard. Support of a polynomial f is the maximum number of variables present in any monomial of f. These results are obtained via direct reductions from the 3-SAT problem.
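To make the orbit and sparsity definitions concrete, the tiny SymPy experiment below (illustrative only; the choice of f, A and b is mine) expands f(A𝐱 + 𝐛) and counts its monomials, showing that sparsity is not invariant under invertible affine changes of variables.

import sympy as sp

x, y = sp.symbols('x y')
f = x * y                                     # a 1-sparse polynomial
A = sp.Matrix([[1, 1], [1, -1]])              # invertible over the rationals
b = sp.Matrix([0, 0])

image = A * sp.Matrix([x, y]) + b
g = sp.expand(f.subs({x: image[0], y: image[1]}, simultaneous=True))
print(g)                                      # x**2 - y**2
print(len(sp.Poly(g, x, y).terms()))          # 2 monomials, so x^2 - y^2 lies in the
                                              # orbit of the 1-sparse polynomial x*y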

Cite as

Omkar Baraskar, Agrim Dewan, Chandan Saha, and Pulkit Sinha. NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 16:1-16:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{baraskar_et_al:LIPIcs.ICALP.2024.16,
  author =	{Baraskar, Omkar and Dewan, Agrim and Saha, Chandan and Sinha, Pulkit},
  title =	{{NP-Hardness of Testing Equivalence to Sparse Polynomials and to Constant-Support Polynomials}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{16:1--16:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.16},
  URN =		{urn:nbn:de:0030-drops-201598},
  doi =		{10.4230/LIPIcs.ICALP.2024.16},
  annote =	{Keywords: Equivalence testing, MCSP, sparse polynomials, 3SAT}
}
Document
Track A: Algorithms, Complexity and Games
Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering

Authors: Sami Davies, Benjamin Moseley, and Heather Newman

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper considers correlation clustering on unweighted complete graphs. We give a combinatorial algorithm that returns a single clustering solution that is simultaneously O(1)-approximate for all 𝓁_p-norms of the disagreement vector; in other words, a combinatorial O(1)-approximation of the all-norms objective for correlation clustering. This is the first proof that minimal sacrifice is needed in order to optimize different norms of the disagreement vector. In addition, our algorithm is the first combinatorial approximation algorithm for the 𝓁₂-norm objective, and more generally the first combinatorial algorithm for the 𝓁_p-norm objective when 1 < p < ∞. It is also faster than all previous algorithms that minimize the 𝓁_p-norm of the disagreement vector, with run-time O(n^ω), where O(n^ω) is the time for matrix multiplication on n × n matrices. When the maximum positive degree in the graph is at most Δ, this can be improved to a run-time of O(nΔ² log n).
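To spell out the objective, the sketch below (toy instance and names of my own) computes the disagreement vector of a clustering of a labelled complete graph; the algorithm described above returns one clustering whose disagreement vector is O(1)-approximate in every 𝓁_p-norm simultaneously.

from itertools import combinations

def disagreement_vector(vertices, plus_edges, cluster):
    """plus_edges: set of frozensets {u,v} labelled '+'; every other pair is '-'.
       cluster: dict mapping each vertex to its cluster id."""
    d = {v: 0 for v in vertices}
    for u, v in combinations(vertices, 2):
        together = cluster[u] == cluster[v]
        positive = frozenset((u, v)) in plus_edges
        if positive != together:              # this pair is a disagreement
            d[u] += 1
            d[v] += 1
    return d

verts = [1, 2, 3, 4]
plus = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3)]}    # a '+' triangle
cluster = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}                 # split the triangle
print(disagreement_vector(verts, plus, cluster))           # {1: 1, 2: 1, 3: 3, 4: 1}
# l_1-norm = 6, l_infinity-norm = 3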

Cite as

Sami Davies, Benjamin Moseley, and Heather Newman. Simultaneously Approximating All 𝓁_p-Norms in Correlation Clustering. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{davies_et_al:LIPIcs.ICALP.2024.52,
  author =	{Davies, Sami and Moseley, Benjamin and Newman, Heather},
  title =	{{Simultaneously Approximating All 𝓁\_p-Norms in Correlation Clustering}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.52},
  URN =		{urn:nbn:de:0030-drops-201950},
  doi =		{10.4230/LIPIcs.ICALP.2024.52},
  annote =	{Keywords: Approximation algorithms, correlation clustering, all-norms, lp-norms}
}
Document
Track A: Algorithms, Complexity and Games
Low-Memory Algorithms for Online Edge Coloring

Authors: Prantar Ghosh and Manuel Stoeckl

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
For edge coloring, the online and the W-streaming models seem somewhat orthogonal: the former needs edges to be assigned colors immediately after insertion, typically without any space restrictions, while the latter limits memory to be sublinear in the input size but allows an edge’s color to be announced any time after its insertion. We aim for the best of both worlds by designing small-space online algorithms for edge coloring. Our online algorithms significantly improve upon the memory used by prior ones while achieving an O(1)-competitive ratio. We study the problem under both (adversarial) edge arrivals and vertex arrivals. Under vertex arrivals of any n-node graph with maximum vertex-degree Δ, our online O(Δ)-coloring algorithm uses only semi-streaming space (i.e., Õ(n) space, where the Õ(.) notation hides polylog(n) factors). Under edge arrivals, we obtain an online O(Δ)-coloring in Õ(n√Δ) space. We also achieve a smooth color-space tradeoff: for any t = O(Δ), we get an O(Δt(log²Δ))-coloring in Õ(n√{Δ/t}) space, improving upon the state of the art that used Õ(nΔ/t) space for the same number of colors. The improvements stem from extensive use of random permutations that enable us to avoid previously used colors. Most of our algorithms can be derandomized and extended to multigraphs, where edge coloring is known to be considerably harder than for simple graphs.
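For contrast with the low-memory algorithms above, here is the textbook greedy online edge-colouring baseline under edge arrivals (a sketch with made-up names): it uses at most 2Δ-1 colours but remembers the set of colours already used at every vertex, i.e. Θ(m) words in the worst case, which is exactly the memory footprint the paper avoids.

def greedy_online_edge_coloring(edge_stream, max_degree):
    """Colour each edge immediately on arrival with the first colour free at both
       endpoints; 2*max_degree - 1 colours always suffice."""
    palette = range(2 * max_degree - 1)
    used_at = {}                              # vertex -> set of colours seen at it
    colours = {}                              # edge -> colour (announced immediately)
    for u, v in edge_stream:
        busy = used_at.setdefault(u, set()) | used_at.setdefault(v, set())
        c = next(c for c in palette if c not in busy)   # |busy| <= 2*max_degree - 2
        colours[(u, v)] = c
        used_at[u].add(c)
        used_at[v].add(c)
    return colours

stream = [(1, 2), (2, 3), (3, 1), (1, 4)]     # maximum degree 3
print(greedy_online_edge_coloring(stream, 3)) # {(1, 2): 0, (2, 3): 1, (3, 1): 2, (1, 4): 1}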

Cite as

Prantar Ghosh and Manuel Stoeckl. Low-Memory Algorithms for Online Edge Coloring. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 71:1-71:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ghosh_et_al:LIPIcs.ICALP.2024.71,
  author =	{Ghosh, Prantar and Stoeckl, Manuel},
  title =	{{Low-Memory Algorithms for Online Edge Coloring}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{71:1--71:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.71},
  URN =		{urn:nbn:de:0030-drops-202146},
  doi =		{10.4230/LIPIcs.ICALP.2024.71},
  annote =	{Keywords: Edge coloring, streaming model, online algorithms}
}
Document
Track A: Algorithms, Complexity and Games
Refuting Approaches to the Log-Rank Conjecture for XOR Functions

Authors: Hamed Hatami, Kaave Hosseini, Shachar Lovett, and Anthony Ostuni

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
The log-rank conjecture, a longstanding problem in communication complexity, has persistently eluded resolution for decades. Consequently, some recent efforts have focused on potential approaches for establishing the conjecture in the special case of XOR functions, where the communication matrix is lifted from a boolean function, and the rank of the matrix equals the Fourier sparsity of the function, which is the number of its nonzero Fourier coefficients. In this note, we refute two conjectures. The first has origins in Montanaro and Osborne (arXiv'09) and is considered in Tsang, Wong, Xie, and Zhang (FOCS'13), and the second is due to Mande and Sanyal (FSTTCS'20). These conjectures were proposed in order to improve the best-known bound of Lovett (STOC'14) regarding the log-rank conjecture in the special case of XOR functions. Both conjectures speculate that the set of nonzero Fourier coefficients of the boolean function has some strong additive structure. We refute these conjectures by constructing two specific boolean functions tailored to each.
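The Fourier sparsity mentioned above can be checked directly for small functions. This brute-force sketch (illustrative only; function names are mine) counts the nonzero Fourier coefficients of a ±1-valued boolean function: parity on two bits has sparsity 1, while AND has sparsity 4.

from itertools import product

def fourier_sparsity(f, n):
    """Number of nonzero Fourier coefficients of f: {0,1}^n -> {-1,+1}."""
    cube = list(product([0, 1], repeat=n))
    count = 0
    for S in cube:                            # S encodes a subset of [n] as a 0/1 vector
        # 2^n times the Fourier coefficient f-hat(S); an integer, so comparison is exact
        scaled_coeff = sum(f(x) * (-1) ** sum(s * xi for s, xi in zip(S, x)) for x in cube)
        if scaled_coeff != 0:
            count += 1
    return count

parity = lambda x: -1 if (x[0] ^ x[1]) else 1
conj = lambda x: -1 if (x[0] & x[1]) else 1
print(fourier_sparsity(parity, 2), fourier_sparsity(conj, 2))   # -> 1 4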

Cite as

Hamed Hatami, Kaave Hosseini, Shachar Lovett, and Anthony Ostuni. Refuting Approaches to the Log-Rank Conjecture for XOR Functions. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 82:1-82:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hatami_et_al:LIPIcs.ICALP.2024.82,
  author =	{Hatami, Hamed and Hosseini, Kaave and Lovett, Shachar and Ostuni, Anthony},
  title =	{{Refuting Approaches to the Log-Rank Conjecture for XOR Functions}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{82:1--82:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.82},
  URN =		{urn:nbn:de:0030-drops-202252},
  doi =		{10.4230/LIPIcs.ICALP.2024.82},
  annote =	{Keywords: Communication complexity, log-rank conjecture, XOR functions, additive structure}
}
Document
Track A: Algorithms, Complexity and Games
No Polynomial Kernels for Knapsack

Authors: Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
This paper focuses on kernelization algorithms for the fundamental Knapsack problem. A kernelization algorithm (or kernel) is a polynomial-time reduction from a problem onto itself, where the output size is bounded by a function of some problem-specific parameter. Such algorithms provide a theoretical model for data reduction and preprocessing and are central in the area of parameterized complexity. In this way, a kernel for Knapsack with respect to some parameter k reduces any instance of Knapsack to an equivalent instance of size at most f(k) in polynomial time, for some computable function f. When f(k) = k^{O(1)}, we call such a reduction a polynomial kernel. Our study focuses on two natural parameters for Knapsack: the number w_# of different item weights, and the number p_# of different item profits. Our main technical contribution is a proof showing that Knapsack does not admit a polynomial kernel for either of these two parameters under standard complexity-theoretic assumptions. Our proof involves an elaborate application of the standard kernelization lower bound framework and develops along the way novel ideas that should be useful for other problems as well. We complement our lower bounds by showing that Knapsack admits a polynomial kernel for the combined parameter w_# ⋅ p_#.
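To pin down the two parameters, the trivial sketch below (toy instance of my own) computes w_# and p_# for a Knapsack instance given as (weight, profit) pairs; the results above say that no polynomial-time preprocessing can shrink an instance to size polynomial in w_# alone or p_# alone, whereas a polynomial kernel does exist for the combined parameter w_# ⋅ p_#.

items = [(4, 10), (4, 7), (9, 10), (9, 7), (9, 10), (4, 10)]   # (weight, profit) pairs
w_distinct = {w for w, _ in items}            # distinct weights  -> w_# = 2
p_distinct = {p for _, p in items}            # distinct profits  -> p_# = 2
print(len(w_distinct), len(p_distinct), len(w_distinct) * len(p_distinct))   # 2 2 4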

Cite as

Klaus Heeger, Danny Hermelin, Matthias Mnich, and Dvir Shabtay. No Polynomial Kernels for Knapsack. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 83:1-83:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{heeger_et_al:LIPIcs.ICALP.2024.83,
  author =	{Heeger, Klaus and Hermelin, Danny and Mnich, Matthias and Shabtay, Dvir},
  title =	{{No Polynomial Kernels for Knapsack}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{83:1--83:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.83},
  URN =		{urn:nbn:de:0030-drops-202261},
  doi =		{10.4230/LIPIcs.ICALP.2024.83},
  annote =	{Keywords: Knapsack, polynomial kernels, compositions, number of different weights, number of different profits}
}
Document
Track A: Algorithms, Complexity and Games
Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle

Authors: Aaron Potechin and Aaron Zhang

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We show that the minimum total coefficient size of a Nullstellensatz proof of the pigeonhole principle on n+1 pigeons and n holes is 2^{Θ(n)}. We also investigate the ordering principle and construct an explicit Nullstellensatz proof for the ordering principle on n elements with total coefficient size 2ⁿ - n.
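For orientation, the pigeonhole principle on n+1 pigeons and n holes is usually translated into the following unsatisfiable polynomial system (the standard encoding; the paper's exact conventions may differ slightly), with x_{i,j} meaning "pigeon i sits in hole j":

  x_{i,j}² - x_{i,j} = 0              for all i ∈ [n+1], j ∈ [n]         (Boolean variables)
  1 - ∑_{j=1}^{n} x_{i,j} = 0         for each pigeon i                   (every pigeon gets a hole)
  x_{i,j} ⋅ x_{i',j} = 0              for all i ≠ i' and each hole j      (no hole holds two pigeons)

A Nullstellensatz refutation exhibits polynomials Q_k with ∑_k Q_k ⋅ A_k = 1, where the A_k range over these axioms; the quantity bounded in the paper is, roughly, the total magnitude of the coefficients appearing in the Q_k.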

Cite as

Aaron Potechin and Aaron Zhang. Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 117:1-117:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{potechin_et_al:LIPIcs.ICALP.2024.117,
  author =	{Potechin, Aaron and Zhang, Aaron},
  title =	{{Bounds on the Total Coefficient Size of Nullstellensatz Proofs of the Pigeonhole Principle}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{117:1--117:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.117},
  URN =		{urn:nbn:de:0030-drops-202604},
  doi =		{10.4230/LIPIcs.ICALP.2024.117},
  annote =	{Keywords: Proof complexity, Nullstellensatz, pigeonhole principle, coefficient size}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations

Authors: Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
We introduce a powerful termination algorithm for structurally recursive functions that improves on the core ideas behind lexicographic termination algorithms for functional programs. The algorithm generates linear-lexicographic combinations of primitive measure functions measuring the recursive structure of terms. We introduce a measure language that enables the simplification and comparison of measures, and we prove meta-theoretic properties of this language. Moreover, we demonstrate our algorithm on an untyped first-order functional language, prove its soundness, and show that it runs in polynomial time. We also provide a Haskell implementation. As part of this work, we show how to solve the maximisation of negative vector-components as a linear program.
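To illustrate what a lexicographic linear combination of measures certifies (the checking side only; the paper's contribution is generating such certificates, and all names below are mine), the sketch tests that every recursive call strictly decreases some linear combination of the primitive measures while leaving all earlier combinations non-increasing.

def lex_linear_decreases(call_pairs, weight_vectors):
    """call_pairs: (caller_measures, callee_measures) tuples, one per recursive call.
       weight_vectors: linear combinations of measures, tried in lexicographic order."""
    dot = lambda w, m: sum(a * b for a, b in zip(w, m))
    for caller, callee in call_pairs:
        for w in weight_vectors:
            before, after = dot(w, caller), dot(w, callee)
            if before > after:
                break                  # strict decrease at this lexicographic position
            if before < after:
                return False           # increases before any decrease: not a descent
        else:
            return False               # no combination decreased for this call
    return True

# Measures (m, n) at two recursive call sites: (3, 5) -> (2, 7) and (3, 5) -> (3, 4).
calls = [((3, 5), (2, 7)), ((3, 5), (3, 4))]
print(lex_linear_decreases(calls, [(1, 0), (0, 1)]))     # True: plain lexicographic descent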

Cite as

Raphael Douglas Giles, Vincent Jackson, and Christine Rizkallah. T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 139:1-139:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{giles_et_al:LIPIcs.ICALP.2024.139,
  author =	{Giles, Raphael Douglas and Jackson, Vincent and Rizkallah, Christine},
  title =	{{T-Rex: Termination of Recursive Functions Using Lexicographic Linear Combinations}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{139:1--139:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.139},
  URN =		{urn:nbn:de:0030-drops-202827},
  doi =		{10.4230/LIPIcs.ICALP.2024.139},
  annote =	{Keywords: Termination, Recursive functions}
}
Document
Track B: Automata, Logic, Semantics, and Theory of Programming
The 2-Dimensional Constraint Loop Problem Is Decidable

Authors: Quentin Guilmant, Engel Lefaucheux, Joël Ouaknine, and James Worrell

Published in: LIPIcs, Volume 297, 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)


Abstract
A linear constraint loop is specified by a system of linear inequalities that define the relation between the values of the program variables before and after a single execution of the loop body. In this paper we consider the problem of determining whether such a loop terminates, i.e., whether all maximal executions are finite, regardless of how the loop is initialised and how the non-determinism in the loop body is resolved. We focus on the variant of the termination problem in which the loop variables range over ℝ. Our main result is that the termination problem is decidable over the reals in dimension 2. A more abstract formulation of our main result is that it is decidable whether a binary relation on ℝ² that is given as a conjunction of linear constraints is well-founded.
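As a concrete 2-dimensional instance of the problem statement (a standard illustrative example, not taken from the paper), consider the constraint loop on (x, y) ∈ ℝ² given by the conjunction

  x ≥ 0  ∧  x' = x + y  ∧  y' = y - 1.

Since y_t = y₀ - t and hence x_t = x₀ + t⋅y₀ - t(t-1)/2 → -∞, the guard x ≥ 0 eventually fails from every real initial state, so the loop terminates; it is often cited as a terminating loop with no linear ranking function, which is the kind of behaviour a decision procedure for dimension 2 must handle.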

Cite as

Quentin Guilmant, Engel Lefaucheux, Joël Ouaknine, and James Worrell. The 2-Dimensional Constraint Loop Problem Is Decidable. In 51st International Colloquium on Automata, Languages, and Programming (ICALP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 297, pp. 140:1-140:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{guilmant_et_al:LIPIcs.ICALP.2024.140,
  author =	{Guilmant, Quentin and Lefaucheux, Engel and Ouaknine, Jo\"{e}l and Worrell, James},
  title =	{{The 2-Dimensional Constraint Loop Problem Is Decidable}},
  booktitle =	{51st International Colloquium on Automata, Languages, and Programming (ICALP 2024)},
  pages =	{140:1--140:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-322-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{297},
  editor =	{Bringmann, Karl and Grohe, Martin and Puppis, Gabriele and Svensson, Ola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2024.140},
  URN =		{urn:nbn:de:0030-drops-202831},
  doi =		{10.4230/LIPIcs.ICALP.2024.140},
  annote =	{Keywords: Linear Constraints Loops, Minkowski-Weyl, Convex Sets, Asymptotic Expansions}
}
Document
Current and Future Challenges in Knowledge Representation and Reasoning (Dagstuhl Perspectives Workshop 22282)

Authors: James P. Delgrande, Birte Glimm, Thomas Meyer, Miroslaw Truszczynski, and Frank Wolter

Published in: Dagstuhl Manifestos, Volume 10, Issue 1 (2024)


Abstract
Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

Cite as

James P. Delgrande, Birte Glimm, Thomas Meyer, Miroslaw Truszczynski, and Frank Wolter. Current and Future Challenges in Knowledge Representation and Reasoning (Dagstuhl Perspectives Workshop 22282). In Dagstuhl Manifestos, Volume 10, Issue 1, pp. 1-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{delgrande_et_al:DagMan.10.1.1,
  author =	{Delgrande, James P. and Glimm, Birte and Meyer, Thomas and Truszczynski, Miroslaw and Wolter, Frank},
  title =	{{Current and Future Challenges in Knowledge Representation and Reasoning (Dagstuhl Perspectives Workshop 22282)}},
  pages =	{1--61},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2024},
  volume =	{10},
  number =	{1},
  editor =	{Delgrande, James P. and Glimm, Birte and Meyer, Thomas and Truszczynski, Miroslaw and Wolter, Frank},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.10.1.1},
  URN =		{urn:nbn:de:0030-drops-201403},
  doi =		{10.4230/DagMan.10.1.1},
  annote =	{Keywords: Knowledge representation and reasoning, Applications of logics, Declarative representations, Formal logic}
}
Document
Position
Grounding Stream Reasoning Research

Authors: Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.

Cite as

Pieter Bonte, Jean-Paul Calbimonte, Daniel de Leng, Daniele Dell'Aglio, Emanuele Della Valle, Thomas Eiter, Federico Giannini, Fredrik Heintz, Konstantin Schekotihin, Danh Le-Phuoc, Alessandra Mileo, Patrik Schneider, Riccardo Tommasini, Jacopo Urbani, and Giacomo Ziffer. Grounding Stream Reasoning Research. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 2:1-2:47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{bonte_et_al:TGDK.2.1.2,
  author =	{Bonte, Pieter and Calbimonte, Jean-Paul and de Leng, Daniel and Dell'Aglio, Daniele and Della Valle, Emanuele and Eiter, Thomas and Giannini, Federico and Heintz, Fredrik and Schekotihin, Konstantin and Le-Phuoc, Danh and Mileo, Alessandra and Schneider, Patrik and Tommasini, Riccardo and Urbani, Jacopo and Ziffer, Giacomo},
  title =	{{Grounding Stream Reasoning Research}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:47},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.2},
  URN =		{urn:nbn:de:0030-drops-198597},
  doi =		{10.4230/TGDK.2.1.2},
  annote =	{Keywords: Stream Reasoning, Stream Processing, RDF streams, Streaming Linked Data, Continuous query processing, Temporal Logics, High-performance computing, Databases}
}
Document
Survey
Logics for Conceptual Data Modelling: A Review

Authors: Pablo R. Fillottrani and C. Maria Keet

Published in: TGDK, Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge, Volume 2, Issue 1


Abstract
Information modelling for databases and object-oriented information systems avails of conceptual data modelling languages such as EER and UML Class Diagrams. Many attempts exist to add logical rigour to them, for various reasons and with disparate strengths. In this paper we aim to provide a structured overview of the many efforts. We focus on aims, approaches to the formalisation, including key dimensions of choice points, popular logics used, and the main relevant reasoning services. We close with current challenges and research directions.
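As a one-line illustration of the kind of formalisation surveyed here (an example of my own, not drawn from the paper): a UML class diagram constraint such as "every Employee works for exactly one Company" can be captured as a description logic axiom with qualified number restrictions,

  Employee ⊑ (= 1 worksFor.Company),

after which standard reasoning services (class satisfiability, subsumption, consistency of the whole diagram) become available.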

Cite as

Pablo R. Fillottrani and C. Maria Keet. Logics for Conceptual Data Modelling: A Review. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 4:1-4:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{fillottrani_et_al:TGDK.2.1.4,
  author =	{Fillottrani, Pablo R. and Keet, C. Maria},
  title =	{{Logics for Conceptual Data Modelling: A Review}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:30},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.4},
  URN =		{urn:nbn:de:0030-drops-198616},
  doi =		{10.4230/TGDK.2.1.4},
  annote =	{Keywords: Conceptual Data Modelling, EER, UML, Description Logics, OWL}
}
Document
A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits

Authors: Mrinal Kumar and Ben Lee Volk

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of the non-rigid matrices: that is, we show that for every large enough field 𝔽, there is a non-zero n²-variate polynomial P ∈ 𝔽[x_{1, 1}, …, x_{n, n}] of degree at most poly(n) such that every matrix M which can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n²/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer and Landsberg [Fulvio Gesmundo et al., 2016] and improves the best upper bound known for this problem down from exp(n²) [Abhinav Kumar et al., 2014; Fulvio Gesmundo et al., 2016] to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n²/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [Amir Shpilka and Ilya Volkovich, 2015] to construct low degree "universal" maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [Valentine Kabanets and Russell Impagliazzo, 2004].
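The set of matrices referred to above is easy to sample from. The NumPy sketch below (illustrative only, with parameters chosen to respect the rank ≤ n/100 and sparsity ≤ n²/100 thresholds) constructs one non-rigid matrix, i.e. one point of the variety on which the degree-poly(n) equation P vanishes.

import numpy as np

rng = np.random.default_rng(0)
n, r, s = 200, 2, 400                        # r <= n/100, s <= n^2/100

low_rank = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank at most r
sparse = np.zeros((n, n))
positions = rng.choice(n * n, size=s, replace=False)
sparse.flat[positions] = rng.standard_normal(s)                        # at most s nonzeros

M = low_rank + sparse                        # a non-rigid matrix in the sense above
print(np.linalg.matrix_rank(low_rank), np.count_nonzero(sparse))       # 2 400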

Cite as

Mrinal Kumar and Ben Lee Volk. A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 9:1-9:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{kumar_et_al:LIPIcs.ITCS.2021.9,
  author =	{Kumar, Mrinal and Volk, Ben Lee},
  title =	{{A Polynomial Degree Bound on Equations for Non-Rigid Matrices and Small Linear Circuits}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{9:1--9:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.9},
  URN =		{urn:nbn:de:0030-drops-135486},
  doi =		{10.4230/LIPIcs.ITCS.2021.9},
  annote =	{Keywords: Rigid Matrices, Linear Circuits, Degree Bounds, Circuit Lower Bounds}
}
Document
An Interpretable Classification Method for Predicting Drug Resistance in M. Tuberculosis

Authors: Hooman Zabeti, Nick Dexter, Amir Hosein Safari, Nafiseh Sedaghat, Maxwell Libbrecht, and Leonid Chindelevitch

Published in: LIPIcs, Volume 172, 20th International Workshop on Algorithms in Bioinformatics (WABI 2020)


Abstract
Motivation: The prediction of drug resistance and the identification of its mechanisms in bacteria such as Mycobacterium tuberculosis, the etiological agent of tuberculosis, is a challenging problem. Modern methods based on testing against a catalogue of previously identified mutations often yield poor predictive performance. On the other hand, machine learning techniques have demonstrated high predictive accuracy, but many of them lack interpretability to aid in identifying specific mutations which lead to resistance. We propose a novel technique, inspired by the group testing problem and Boolean compressed sensing, which yields highly accurate predictions and interpretable results at the same time. Results: We develop a modified version of the Boolean compressed sensing problem for identifying drug resistance, and implement its formulation as an integer linear program. This allows us to characterize the predictive accuracy of the technique and select an appropriate metric to optimize. A simple adaptation of the problem also allows us to quantify the sensitivity-specificity trade-off of our model under different regimes. We test the predictive accuracy of our approach on a variety of commonly used antibiotics in treating tuberculosis and find that it has accuracy comparable to that of standard machine learning models and points to several genes with previously identified association to drug resistance.
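The formulation is, at its core, a group-testing style integer linear program. The PuLP sketch below gives a generic version of that shape (toy data and variable names of my own, not necessarily the paper's exact objective or constraints): select as few mutation columns as possible whose presence explains the resistant samples, with slack variables absorbing misclassifications.

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

X = [[1, 0, 1],                               # samples x mutations (presence/absence)
     [0, 1, 1],
     [0, 0, 0]]
y = [1, 1, 0]                                 # 1 = resistant, 0 = susceptible
penalty = 10                                  # weight on classification errors

prob = LpProblem("boolean_group_testing", LpMinimize)
w = [LpVariable(f"w{j}", cat=LpBinary) for j in range(3)]    # mutation selected?
slack = [LpVariable(f"s{i}", lowBound=0) for i in range(3)]  # per-sample error

prob += lpSum(w) + penalty * lpSum(slack)                    # sparse and accurate
for i, row in enumerate(X):
    hits = lpSum(row[j] * w[j] for j in range(3))
    if y[i] == 1:
        prob += hits + slack[i] >= 1          # resistant: some selected mutation present
    else:
        prob += hits - slack[i] <= 0          # susceptible: no selected mutation present

prob.solve()
print([int(v.value()) for v in w])            # [0, 0, 1]: mutation 2 explains resistance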

Cite as

Hooman Zabeti, Nick Dexter, Amir Hosein Safari, Nafiseh Sedaghat, Maxwell Libbrecht, and Leonid Chindelevitch. An Interpretable Classification Method for Predicting Drug Resistance in M. Tuberculosis. In 20th International Workshop on Algorithms in Bioinformatics (WABI 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 172, pp. 2:1-2:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zabeti_et_al:LIPIcs.WABI.2020.2,
  author =	{Zabeti, Hooman and Dexter, Nick and Safari, Amir Hosein and Sedaghat, Nafiseh and Libbrecht, Maxwell and Chindelevitch, Leonid},
  title =	{{An Interpretable Classification Method for Predicting Drug Resistance in M. Tuberculosis}},
  booktitle =	{20th International Workshop on Algorithms in Bioinformatics (WABI 2020)},
  pages =	{2:1--2:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-161-0},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{172},
  editor =	{Kingsford, Carl and Pisanti, Nadia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2020.2},
  URN =		{urn:nbn:de:0030-drops-127911},
  doi =		{10.4230/LIPIcs.WABI.2020.2},
  annote =	{Keywords: Drug resistance, whole-genome sequencing, interpretable machine learning, integer linear programming, rule-based learning}
}
Document
Extended Abstract
Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity (Extended Abstract)

Authors: Amir M. Ben-Amram

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
In the theory of discrete-time dynamical systems, one studies the limiting behaviour of processes defined by iterating a fixed function f over a given space. A much-studied case involves piecewise affine functions on ℝⁿ. Blondel et al. (2001) studied the decidability of questions such as mortality for such functions with rational coefficients. Mortality means that every trajectory includes a 0; if the iteration is seen as a loop while (x ≠ 0) x := f(x), mortality means that the loop is guaranteed to terminate. Blondel et al. proved that the problems are undecidable when the dimension n of the state space is at least two. They assume that the variables range over the rationals; this is an essential assumption. From a program analysis (and discrete computability) viewpoint, it would be more interesting to consider integer-valued variables. This paper establishes (un)decidability results for the integer setting. We show that, over the integers as well, undecidability (moreover, Π⁰₂-completeness) begins at two dimensions. We further investigate the effect of several restrictions on the iterated functions. Specifically, we consider bounding the size of the partition defining f, and restricting the coefficients of the linear components. In the decidable cases, we give complexity results. The complexity is PTIME for affine functions, but for piecewise-affine ones it is PSPACE-complete. The undecidability proofs use some variants of the Collatz problem, which may be of independent interest.
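Since the general problem is undecidable (indeed Π⁰₂-complete in two dimensions, as stated above), the best a quick experiment can do is simulate the loop while (x ≠ 0) x := f(x) from finitely many integer start states. The sketch below does that for a small piecewise affine map of my own choosing (not one from the paper), which happens to be mortal.

def f(x, y):
    """A piecewise affine map on Z^2 with four linear pieces: drive x to 0, then y to 0."""
    if x > 0:
        return (x - 1, y)
    if x < 0:
        return (x + 1, y)
    if y > 0:
        return (x, y - 1)
    return (x, y + 1)                         # here x == 0 and y < 0

def dies(x, y, max_steps=10_000):
    """Simulate while (x, y) != (0, 0): (x, y) := f(x, y), up to a step bound."""
    for _ in range(max_steps):
        if (x, y) == (0, 0):
            return True
        x, y = f(x, y)
    return False

print(all(dies(x, y) for x in range(-20, 21) for y in range(-20, 21)))   # True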

Cite as

Amir M. Ben-Amram. Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity (Extended Abstract). In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 514-525, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


BibTeX

@InProceedings{benamram:LIPIcs.STACS.2013.514,
  author =	{Ben-Amram, Amir M.},
  title =	{{Mortality of Iterated Piecewise Affine Functions over the Integers: Decidability and Complexity}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{514--525},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.514},
  URN =		{urn:nbn:de:0030-drops-39615},
  doi =		{10.4230/LIPIcs.STACS.2013.514},
  annote =	{Keywords: discrete-time dynamical systems, termination, Collatz problem}
}