6 Search Results for "Gascón, Adrià"


Document
Even Shorter Proofs Without New Variables

Authors: Adrián Rebola-Pardo

Published in: LIPIcs, Volume 271, 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023)


Abstract
Proof formats for SAT solvers have diversified over the last decade, enabling new features such as extended resolution-like capabilities, very general extension-free rules, inclusion of proof hints, and pseudo-boolean reasoning. Interference-based methods have been proven effective, and some theoretical work has been undertaken to better explain their limits and semantics. In this work, we combine the subsumption redundancy notion from [Sam Buss and Neil Thapen, 2019] and the overwrite logic framework from [Adrián Rebola-Pardo and Martin Suda, 2018]. Natural generalizations then become apparent, enabling even shorter proofs of the pigeonhole principle (compared to those from [Marijn J. H. Heule et al., 2017]) and smaller unsatisfiable core generation.
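As context for readers unfamiliar with clausal proof formats, the Python sketch below illustrates the basic reverse unit propagation (RUP) check that underlies DRAT-style proofs, applied to a tiny pigeonhole instance. It is a minimal illustration of the general setting only, not the subsumption/overwrite generalization introduced in the paper, and all names in it are ours.

def unit_propagate(clauses, assignment):
    """Propagate unit clauses; return False on conflict, else the extended assignment."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                       # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return False                   # conflict: clause falsified
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # unit clause forces its literal
                changed = True
    return assignment

def is_rup(clauses, clause):
    """A clause is RUP w.r.t. a CNF if asserting its negation propagates to a conflict."""
    return unit_propagate(clauses, {-lit for lit in clause}) is False

# Pigeonhole instance with two pigeons and one hole, in DIMACS-style literals:
# 1 = "pigeon 1 sits in the hole", 2 = "pigeon 2 sits in the hole".
cnf = [[1], [2], [-1, -2]]
print(is_rup(cnf, []))   # True: the empty clause is RUP, so the formula is unsatisfiable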

Cite as

Adrián Rebola-Pardo. Even Shorter Proofs Without New Variables. In 26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 271, pp. 22:1-22:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{rebolapardo:LIPIcs.SAT.2023.22,
  author =	{Rebola-Pardo, Adri\'{a}n},
  title =	{{Even Shorter Proofs Without New Variables}},
  booktitle =	{26th International Conference on Theory and Applications of Satisfiability Testing (SAT 2023)},
  pages =	{22:1--22:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-286-0},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{271},
  editor =	{Mahajan, Meena and Slivovsky, Friedrich},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2023.22},
  URN =		{urn:nbn:de:0030-drops-184844},
  doi =		{10.4230/LIPIcs.SAT.2023.22},
  annote =	{Keywords: Interference, SAT solving, Unsatisfiability proofs, Unsatisfiable cores}
}
Document
Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs

Authors: Adrián Goga and Andrej Baláž

Published in: LIPIcs, Volume 242, 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)


Abstract
We propose a new technique for creating a space-efficient index for large repetitive text collections, such as pangenomic databases containing sequences of many individuals from the same species. We combine two recent techniques from this area: Wheeler graphs (Gagie et al., 2017) and prefix-free parsing (PFP, Boucher et al., 2019). Wheeler graphs are a general framework encompassing several indexes based on the Burrows-Wheeler transform (BWT), such as the FM-index. Wheeler graphs admit a succinct representation which can be further compacted by employing the idea of tunnelling, which exploits redundancies in the form of parallel, equally-labelled paths called blocks that can be merged into a single path. The problem of finding the optimal set of blocks for tunnelling, i.e., the one that minimizes the size of the resulting Wheeler graph, is known to be NP-complete and remains the most computationally challenging part of the tunnelling process. To find an adequate set of blocks in less time, we propose a new method based on prefix-free parsing (PFP). The idea of PFP is to divide the input text into phrases of roughly equal sizes that overlap by a fixed number of characters. The phrases are then sorted lexicographically. The original text is represented by a sequence of phrase ranks (the parse) and a list of all used phrases (the dictionary). In repetitive texts, the PFP representation of the text is generally much shorter than the original, since individual phrases are used many times in the parse, thus reducing the size of the dictionary. To speed up the block selection for tunnelling, we apply PFP to obtain the parse and the dictionary of the original text, tunnel the Wheeler graph of the parse using existing heuristics, and subsequently use this tunnelled parse to construct a compact Wheeler graph of the original text. Compared with constructing a Wheeler graph from the original text without PFP, our method is much faster and uses less memory on collections of pangenomic sequences. Therefore, our method enables the use of Wheeler graphs as a pangenomic reference for real-world pangenomic datasets.
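To make the PFP step more concrete, here is a much-simplified Python sketch of the parsing phase described above: phrase boundaries are chosen at windows whose rolling hash is zero, consecutive phrases overlap by w characters, and the text is then encoded as a dictionary of phrases plus a parse of phrase ranks. The hash, the parameters w and p, and all function names are illustrative choices of ours, not the implementation evaluated in the paper.

def kr_hash(s, mod):
    """Simple polynomial hash standing in for the Karp-Rabin rolling hash used by PFP."""
    h = 0
    for c in s:
        h = (h * 256 + ord(c)) % mod
    return h

def prefix_free_parse(text, w=4, p=11):
    text = text + "$" * w                         # sentinel padding at the end
    boundaries = [0]
    for i in range(w, len(text) - w + 1):
        if kr_hash(text[i:i + w], p) == 0:        # trigger window starts a new phrase
            boundaries.append(i)
    if boundaries[-1] != len(text) - w:
        boundaries.append(len(text) - w)
    # consecutive phrases share the w-character trigger window
    phrases = [text[boundaries[j]:boundaries[j + 1] + w]
               for j in range(len(boundaries) - 1)]
    dictionary = sorted(set(phrases))             # lexicographically sorted phrases
    rank = {ph: r for r, ph in enumerate(dictionary)}
    parse = [rank[ph] for ph in phrases]          # the text as a sequence of phrase ranks
    return dictionary, parse

dictionary, parse = prefix_free_parse("GATTACAGATTACAGATTACA" * 4)
print(len(dictionary), len(parse))                # repetitive input: few distinct phrases, reused often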

Cite as

Adrián Goga and Andrej Baláž. Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs. In 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 242, pp. 18:1-18:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{goga_et_al:LIPIcs.WABI.2022.18,
  author =	{Goga, Adri\'{a}n and Bal\'{a}\v{z}, Andrej},
  title =	{{Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs}},
  booktitle =	{22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)},
  pages =	{18:1--18:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-243-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{242},
  editor =	{Boucher, Christina and Rahmann, Sven},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2022.18},
  URN =		{urn:nbn:de:0030-drops-170529},
  doi =		{10.4230/LIPIcs.WABI.2022.18},
  annote =	{Keywords: Wheeler graphs, BWT tunnelling, prefix-free parsing, pangenomic graphs}
}
Document
Reversible Term Rewriting

Authors: Naoki Nishida, Adrián Palacios, and Germán Vidal

Published in: LIPIcs, Volume 52, 1st International Conference on Formal Structures for Computation and Deduction (FSCD 2016)


Abstract
Essentially, in a reversible programming language, for each forward computation step from state S to state S', there exists a constructive and deterministic method to go backwards from state S' to state S. Besides its theoretical interest, reversible computation is a fundamental concept which is relevant in many different areas like cellular automata, bidirectional program transformation, or quantum computing, to name a few. In this paper, we focus on term rewriting, a computation model that underlies most rule-based programming languages. In general, term rewriting is not reversible, even for injective functions; namely, given a rewrite step t1 -> t2, we do not always have a decidable and deterministic method to get t1 from t2. Here, we introduce a conservative extension of term rewriting that becomes reversible. Furthermore, we also define a transformation to make a rewrite system reversible using standard term rewriting.
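As a small illustration of the underlying idea (not the paper's actual construction for general term rewrite systems), the Python sketch below makes a non-injective rewrite step reversible by recording a trace that stores which rule fired and the binding it erased; all names are hypothetical.

# Two rewrite rules for a boolean 'and':
#   and(x, False) -> False    (erases x, so the plain step cannot be undone)
#   and(x, True)  -> x

def step_fwd(term):
    """One forward step on terms ('and', x, b); returns (result, trace)."""
    f, x, b = term
    assert f == 'and'
    if b is False:
        return False, ('rule1', x)     # record the erased argument x in the trace
    return x, ('rule2',)

def step_bwd(result, trace):
    """Deterministically undo a forward step using the recorded trace."""
    if trace[0] == 'rule1':
        return ('and', trace[1], False)
    return ('and', result, True)

t = ('and', ('or', 'p', 'q'), False)
r, trace = step_fwd(t)
assert step_bwd(r, trace) == t         # the trace makes the step invertible
print(r, trace)                        # False ('rule1', ('or', 'p', 'q'))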

Cite as

Naoki Nishida, Adrián Palacios, and Germán Vidal. Reversible Term Rewriting. In 1st International Conference on Formal Structures for Computation and Deduction (FSCD 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 52, pp. 28:1-28:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{nishida_et_al:LIPIcs.FSCD.2016.28,
  author =	{Nishida, Naoki and Palacios, Adri\'{a}n and Vidal, Germ\'{a}n},
  title =	{{Reversible Term Rewriting}},
  booktitle =	{1st International Conference on Formal Structures for Computation and Deduction (FSCD 2016)},
  pages =	{28:1--28:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-010-1},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{52},
  editor =	{Kesner, Delia and Pientka, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2016.28},
  URN =		{urn:nbn:de:0030-drops-59841},
  doi =		{10.4230/LIPIcs.FSCD.2016.28},
  annote =	{Keywords: term rewriting, reversible computation, program transformation}
}
Document
Two-Restricted One Context Unification is in Polynomial Time

Authors: Adrià Gascón, Manfred Schmidt-Schauß, and Ashish Tiwari

Published in: LIPIcs, Volume 41, 24th EACSL Annual Conference on Computer Science Logic (CSL 2015)


Abstract
One Context Unification (1CU) extends first-order unification by introducing a single context variable. This problem was recently shown to be in NP, but it is not known to be solvable in polynomial time. We show that the case of 1CU where the context variable occurs at most twice in the input (1CU2r) is solvable in polynomial time. Moreover, a polynomial representation of all solutions can also be computed in polynomial time. The 1CU2r problem is important as it is used as a subroutine in polynomial-time algorithms for several more general classes of the 1CU problem. Our algorithm can be seen as an extension of the usual rules of first-order unification and can be used to solve related problems in polynomial time, such as first-order unification of two terms that tolerates one clash. All our results assume that the input terms are represented as directed acyclic graphs.
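For readers who want a concrete baseline, the Python sketch below implements standard first-order syntactic unification with an occurs check, over plain terms rather than the DAG representation assumed in the paper; 1CU extends exactly this setting with a single context variable, which the sketch does not handle. The term encoding and names are our own.

# Variables are strings starting with '?'; applications are tuples (symbol, arg1, ..., argn).

def walk(t, subst):
    while isinstance(t, str) and t.startswith('?') and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str) and s.startswith('?'):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str) and t.startswith('?'):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):            # decompose f(s1,...,sn) = f(t1,...,tn)
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                    # clash of head symbols

# f(?x, g(a)) = f(g(?y), ?x)  yields  {'?x': ('g', '?y'), '?y': 'a'},
# i.e. ?x -> g(a) after composing the bindings.
print(unify(('f', '?x', ('g', 'a')), ('f', ('g', '?y'), '?x')))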

Cite as

Adrià Gascón, Manfred Schmidt-Schauß, and Ashish Tiwari. Two-Restricted One Context Unification is in Polynomial Time. In 24th EACSL Annual Conference on Computer Science Logic (CSL 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 41, pp. 405-422, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{gascon_et_al:LIPIcs.CSL.2015.405,
  author =	{Gasc\'{o}n, Adri\`{a} and Schmidt-Schau{\ss}, Manfred and Tiwari, Ashish},
  title =	{{Two-Restricted One Context Unification is in Polynomial Time}},
  booktitle =	{24th EACSL Annual Conference on Computer Science Logic (CSL 2015)},
  pages =	{405--422},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-90-3},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{41},
  editor =	{Kreutzer, Stephan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2015.405},
  URN =		{urn:nbn:de:0030-drops-54289},
  doi =		{10.4230/LIPIcs.CSL.2015.405},
  annote =	{Keywords: context unification, first-order unification, deduction, type checking}
}
Document
Towards Modelling Actor-Based Concurrency in Term Rewriting

Authors: Adrián Palacios and Germán Vidal

Published in: OASIcs, Volume 46, 2nd International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2015)


Abstract
In this work, we introduce a scheme for modelling actor systems within sequential term rewriting. In our proposal, a TRS consists of the union of three components: the functional part (which is specific to each system), a set of rules for reducing concurrent actions, and a set of rules defining a particular scheduling policy. A key ingredient of our approach is that concurrent systems are modelled by terms in which concurrent actions can never occur inside user-defined function calls. This assumption greatly simplifies the definition of the semantics for concurrent actions, since no term traversal is needed. We prove that these systems are well defined in the sense that concurrent actions can always be reduced. Our approach can be used as a basis for modelling actor-based concurrent programs, which can then be analyzed using existing techniques for term rewrite systems.
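The Python sketch below is a rough operational analogue of the three-component scheme described above: user-defined behaviour, top-level reduction of 'send' actions, and a separate round-robin scheduling policy. It is only meant to convey the shape of the model; the paper encodes all of this as rewrite rules over terms, and every name below is illustrative.

from collections import deque

def ponger(self_id, msg, state):
    """User-defined behaviour: reply to 'ping' with 'pong'."""
    sender, content = msg
    if content == 'ping':
        return [('send', sender, (self_id, 'pong'))], state
    return [], state

actors = {                     # actor id -> (behaviour, state, mailbox)
    'a': (ponger, None, deque([('main', 'ping')])),
    'b': (ponger, None, deque([('main', 'ping')])),
}
outbox = []                    # messages to recipients outside the pool (e.g. 'main')

schedule = deque(actors)       # round-robin scheduling policy over actor ids
for _ in range(4):
    aid = schedule[0]
    schedule.rotate(-1)
    behaviour, state, mailbox = actors[aid]
    if not mailbox:
        continue               # nothing to reduce for this actor
    actions, state = behaviour(aid, mailbox.popleft(), state)
    actors[aid] = (behaviour, state, mailbox)
    for _, target, msg in actions:   # reduce concurrent actions at the top level only
        if target in actors:
            actors[target][2].append(msg)
        else:
            outbox.append((target, msg))

print(outbox)                  # [('main', ('a', 'pong')), ('main', ('b', 'pong'))]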

Cite as

Adrián Palacios and Germán Vidal. Towards Modelling Actor-Based Concurrency in Term Rewriting. In 2nd International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2015). Open Access Series in Informatics (OASIcs), Volume 46, pp. 19-29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


BibTeX

@InProceedings{palacios_et_al:OASIcs.WPTE.2015.19,
  author =	{Palacios, Adri\'{a}n and Vidal, Germ\'{a}n},
  title =	{{Towards Modelling Actor-Based Concurrency in Term Rewriting}},
  booktitle =	{2nd International Workshop on Rewriting Techniques for Program Transformations and Evaluation (WPTE 2015)},
  pages =	{19--29},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-94-1},
  ISSN =	{2190-6807},
  year =	{2015},
  volume =	{46},
  editor =	{Chiba, Yuki and Escobar, Santiago and Nishida, Naoki and Sabel, David and Schmidt-Schau{\ss}, Manfred},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.WPTE.2015.19},
  URN =		{urn:nbn:de:0030-drops-51792},
  doi =		{10.4230/OASIcs.WPTE.2015.19},
  annote =	{Keywords: concurrency, actor model, rewriting}
}
Document
First-Order Unification on Compressed Terms

Authors: Adrià Gascón, Sebastian Maneth, and Lander Ramos

Published in: LIPIcs, Volume 10, 22nd International Conference on Rewriting Techniques and Applications (RTA'11) (2011)


Abstract
Singleton Tree Grammars (STGs) have recently drawn considerable attention. They generalize the sharing of subtrees known from DAGs to the sharing of connected subgraphs, which allows for smaller in-memory representations of trees than DAGs provide. In recent years, several important tree algorithms have been shown to perform efficiently (without decompression) over STGs, e.g., type checking, equivalence checking, and unification. We present a tool that implements an extension of the unification algorithm for STGs. This algorithm makes extensive use of equivalence checking, for which we implemented two variants: the classical exact one and a recent randomized one. Our experiments show that the randomized algorithm performs better. The running times are also compared to those of unification over uncompressed trees.
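To make the compression baseline concrete, the Python sketch below hash-conses subtrees into a DAG, the kind of sharing that STGs generalize to connected subgraphs (tree patterns with a hole); the STG part and the compressed unification algorithm itself are not shown, and the encoding is our own.

def share(term, nodes, index):
    """term is (symbol, child_term, ...); returns the id of a shared DAG node."""
    child_ids = tuple(share(c, nodes, index) for c in term[1:])
    key = (term[0], child_ids)
    if key not in index:                 # first occurrence: allocate a new node
        index[key] = len(nodes)
        nodes.append(key)
    return index[key]

nodes, index = [], {}
t = ('f', ('g', ('a',), ('a',)), ('g', ('a',), ('a',)))   # g(a,a) occurs twice
root = share(t, nodes, index)
print(len(nodes))   # 3 distinct DAG nodes (a, g(a,a), f(...)) instead of 7 tree nodes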

Cite as

Adrià Gascón, Sebastian Maneth, and Lander Ramos. First-Order Unification on Compressed Terms. In 22nd International Conference on Rewriting Techniques and Applications (RTA'11). Leibniz International Proceedings in Informatics (LIPIcs), Volume 10, pp. 51-60, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{gascon_et_al:LIPIcs.RTA.2011.51,
  author =	{Gasc\'{o}n, Adri\`{a} and Maneth, Sebastian and Ramos, Lander},
  title =	{{First-Order Unification on Compressed Terms}},
  booktitle =	{22nd International Conference on Rewriting Techniques and Applications (RTA'11)},
  pages =	{51--60},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-30-9},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{10},
  editor =	{Schmidt-Schauss, Manfred},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.RTA.2011.51},
  URN =		{urn:nbn:de:0030-drops-31263},
  doi =		{10.4230/LIPIcs.RTA.2011.51},
  annote =	{Keywords: unification, matching, grammars, compression, STG, system C++}
}
  • Refine by Author
  • 2 Gascón, Adrià
  • 2 Palacios, Adrián
  • 2 Vidal, Germán
  • 1 Baláž, Andrej
  • 1 Goga, Adrián

  • Refine by Classification
  • 1 Hardware → Theorem proving and SAT solving
  • 1 Theory of computation → Theory and algorithms for application domains

  • Refine by Keyword
  • 1 BWT tunnelling
  • 1 Interference
  • 1 SAT solving
  • 1 STG
  • 1 Unsatisfiability proofs

  • Refine by Type
  • 6 document

  • Refine by Publication Year
  • 2 2015
  • 1 2011
  • 1 2016
  • 1 2022
  • 1 2023
