137 Search Results for "Aßmann, Uwe"


Document
Structural Operational Semantics for Heterogeneously Typed Coalgebras

Authors: Harald König, Uwe Wolter, and Tim Kräuter

Published in: LIPIcs, Volume 270, 10th Conference on Algebra and Coalgebra in Computer Science (CALCO 2023)


Abstract
Concurrently interacting components of a modular software architecture are heterogeneously structured behavioural models. We consider them as coalgebras based on different endofunctors. We formalize the composition of these coalgebras as specially tailored segments of distributive laws of the bialgebraic approach of Turi and Plotkin. The resulting categorical rules for structural operational semantics involve many-sorted algebraic specifications, which leads to a description of the components together with the composed system as a single holistic behavioural system. We evaluate our approach by showing that observational equivalence is a congruence with respect to the algebraic composition operation.

Cite as

Harald König, Uwe Wolter, and Tim Kräuter. Structural Operational Semantics for Heterogeneously Typed Coalgebras. In 10th Conference on Algebra and Coalgebra in Computer Science (CALCO 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 270, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{konig_et_al:LIPIcs.CALCO.2023.7,
  author =	{K\"{o}nig, Harald and Wolter, Uwe and Kr\"{a}uter, Tim},
  title =	{{Structural Operational Semantics for Heterogeneously Typed Coalgebras}},
  booktitle =	{10th Conference on Algebra and Coalgebra in Computer Science (CALCO 2023)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-287-7},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{270},
  editor =	{Baldan, Paolo and de Paiva, Valeria},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CALCO.2023.7},
  URN =		{urn:nbn:de:0030-drops-188048},
  doi =		{10.4230/LIPIcs.CALCO.2023.7},
  annote =	{Keywords: Coalgebra, Bialgebra, Structural operational semantics, Compositionality}
}
Document
Kolmogorov Complexity Characterizes Statistical Zero Knowledge

Authors: Eric Allender, Shuichi Hirahara, and Harsha Tirumala

Published in: LIPIcs, Volume 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023)


Abstract
We show that a decidable promise problem has a non-interactive statistical zero-knowledge proof system if and only if it is randomly reducible via an honest polynomial-time reduction to a promise problem for Kolmogorov-random strings, with a superlogarithmic additive approximation term. This extends recent work by Saks and Santhanam (CCC 2022). We build on this to give new characterizations of Statistical Zero Knowledge (SZK), as well as the related classes NISZK_L and SZK_L.

Cite as

Eric Allender, Shuichi Hirahara, and Harsha Tirumala. Kolmogorov Complexity Characterizes Statistical Zero Knowledge. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 251, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{allender_et_al:LIPIcs.ITCS.2023.3,
  author =	{Allender, Eric and Hirahara, Shuichi and Tirumala, Harsha},
  title =	{{Kolmogorov Complexity Characterizes Statistical Zero Knowledge}},
  booktitle =	{14th Innovations in Theoretical Computer Science Conference (ITCS 2023)},
  pages =	{3:1--3:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-263-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{251},
  editor =	{Tauman Kalai, Yael},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2023.3},
  URN =		{urn:nbn:de:0030-drops-175063},
  doi =		{10.4230/LIPIcs.ITCS.2023.3},
  annote =	{Keywords: Kolmogorov Complexity, Interactive Proofs}
}
Document
Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs

Authors: Adrián Goga and Andrej Baláž

Published in: LIPIcs, Volume 242, 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)


Abstract
We propose a new technique for creating a space-efficient index for large repetitive text collections, such as pangenomic databases containing sequences of many individuals from the same species. We combine two recent techniques from this area: Wheeler graphs (Gagie et al., 2017) and prefix-free parsing (PFP, Boucher et al., 2019). Wheeler graphs are a general framework encompassing several indexes based on the Burrows-Wheeler transform (BWT), such as the FM-index. Wheeler graphs admit a succinct representation which can be further compacted by employing the idea of tunnelling, which exploits redundancies in the form of parallel, equally-labelled paths called blocks that can be merged into a single path. The problem of finding the optimal set of blocks for tunnelling, i.e. the one that minimizes the size of the resulting Wheeler graph, is known to be NP-complete and remains the most computationally challenging part of the tunnelling process. To find an adequate set of blocks in less time, we propose a new method based on prefix-free parsing. The idea of PFP is to divide the input text into phrases of roughly equal sizes that overlap by a fixed number of characters. The phrases are then sorted lexicographically. The original text is represented by a sequence of phrase ranks (the parse) and a list of all used phrases (the dictionary). In repetitive texts, the PFP representation of the text is generally much shorter than the original since individual phrases are used many times in the parse, thus reducing the size of the dictionary. To speed up the block selection for tunnelling, we apply PFP to obtain the parse and the dictionary of the original text, tunnel the Wheeler graph of the parse using existing heuristics and subsequently use this tunnelled parse to construct a compact Wheeler graph of the original text. Compared with constructing a Wheeler graph from the original text without PFP, our method is much faster and uses less memory on collections of pangenomic sequences. Therefore, our method enables the use of Wheeler graphs as a pangenomic reference for real-world pangenomic datasets.
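
To make the parsing step concrete, here is a minimal Python sketch of the PFP idea described above. It is not the authors' implementation: real PFP implementations use a Karp-Rabin rolling hash and sentinel characters, whereas this sketch simply hashes each length-w window and treats a hash of 0 modulo p as a phrase boundary; all parameter values and names are illustrative.

def prefix_free_parse(text, w=4, p=11):
    """Split text into phrases that overlap by w characters.

    A length-w window ends a phrase whenever its hash is 0 modulo p, so
    phrase boundaries depend only on local content and repeated substrings
    are parsed identically.
    """
    def window_hash(s):
        h = 0
        for ch in s:
            h = (h * 256 + ord(ch)) % p
        return h

    padded = "#" * w + text + "$" * w          # pad so the ends also get boundaries
    phrases, start = [], 0
    for i in range(w, len(padded) + 1):
        if i == len(padded) or window_hash(padded[i - w:i]) == 0:
            phrases.append(padded[start:i])    # consecutive phrases share w characters
            start = i - w
    dictionary = sorted(set(phrases))          # lexicographically sorted distinct phrases
    rank = {ph: r for r, ph in enumerate(dictionary)}
    parse = [rank[ph] for ph in phrases]       # the text as a sequence of phrase ranks
    return dictionary, parse

dictionary, parse = prefix_free_parse("ACGTACGTACGTTTACGTACGT")
print(len(dictionary), "distinct phrases; parse:", parse)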

Cite as

Adrián Goga and Andrej Baláž. Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs. In 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 242, pp. 18:1-18:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{goga_et_al:LIPIcs.WABI.2022.18,
  author =	{Goga, Adri\'{a}n and Bal\'{a}\v{z}, Andrej},
  title =	{{Prefix-Free Parsing for Building Large Tunnelled Wheeler Graphs}},
  booktitle =	{22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)},
  pages =	{18:1--18:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-243-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{242},
  editor =	{Boucher, Christina and Rahmann, Sven},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2022.18},
  URN =		{urn:nbn:de:0030-drops-170529},
  doi =		{10.4230/LIPIcs.WABI.2022.18},
  annote =	{Keywords: Wheeler graphs, BWT tunnelling, prefix-free parsing, pangenomic graphs}
}
Document
Data Structures for Modern Memory and Storage Hierarchies (Dagstuhl Seminar 21283)

Authors: Stratos Idreos, Viktor Leis, Kai-Uwe Sattler, and Margo Seltzer

Published in: Dagstuhl Reports, Volume 11, Issue 6 (2021)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 21283 "Data Structures for Modern Memory and Storage Hierarchies". For decades, computers consisted of a CPU, volatile main memory, and persistent disk. Today, modern storage technologies such as flash and persistent memory as well as the seemingly inevitable migration into virtualized cloud instances, connected through high-speed networks, have radically changed the hardware landscape. These technologies have major implications on how to design data structures and high-performance systems software. The seminar discussed how to adapt data structures and software systems to this new hardware landscape.

Cite as

Stratos Idreos, Viktor Leis, Kai-Uwe Sattler, and Margo Seltzer. Data Structures for Modern Memory and Storage Hierarchies (Dagstuhl Seminar 21283). In Dagstuhl Reports, Volume 11, Issue 6, pp. 38-53, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@Article{idreos_et_al:DagRep.11.6.38,
  author =	{Idreos, Stratos and Leis, Viktor and Sattler, Kai-Uwe and Seltzer, Margo},
  title =	{{Data Structures for Modern Memory and Storage Hierarchies (Dagstuhl Seminar 21283)}},
  pages =	{38--53},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2021},
  volume =	{11},
  number =	{6},
  editor =	{Idreos, Stratos and Leis, Viktor and Sattler, Kai-Uwe and Seltzer, Margo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.11.6.38},
  URN =		{urn:nbn:de:0030-drops-155797},
  doi =		{10.4230/DagRep.11.6.38},
  annote =	{Keywords: Cloud, Data Structures, Database Systems, Flash, Near-Data Processing, Persistent Memory}
}
Document
Compressing and Indexing Aligned Readsets

Authors: Travis Gagie, Garance Gourdel, and Giovanni Manzini

Published in: LIPIcs, Volume 201, 21st International Workshop on Algorithms in Bioinformatics (WABI 2021)


Abstract
Compressed full-text indexes are one of the main success stories of bioinformatics data structures, but even they struggle to handle some DNA readsets. This may seem surprising since, at least when dealing with short reads from the same individual, the readset will be highly repetitive and, thus, highly compressible. If we are not careful, however, this advantage can be more than offset by two disadvantages: first, since most base pairs are included in at least tens of reads each, the uncompressed readset is likely to be at least an order of magnitude larger than the individual’s uncompressed genome; second, these indexes usually pay some space overhead for each string they store, and the total overhead can be substantial when dealing with millions of reads. The most successful compressed full-text indexes for readsets so far are based on the Extended Burrows-Wheeler Transform (EBWT) and use a sorting heuristic to try to reduce the space overhead per read, but they still treat the reads as separate strings and thus may not take full advantage of the readset’s structure. For example, if we have already assembled an individual’s genome from the readset, then we can usually use it to compress the readset well: e.g., we store the gap-coded list of reads' starting positions; we store the list of their lengths, which is often highly compressible; and we store information about the sequencing errors, which are rare with short reads. There is, however, nowhere to plug an assembled genome into the EBWT. In this paper we show how to use one or more assembled or partially assembled genomes as the basis for a compressed full-text index of its readset. Specifically, we build a labelled tree by taking the assembled genome as a trunk and grafting onto it the reads that align to it, at the starting positions of their alignments. Next, we compute the eXtended Burrows-Wheeler Transform (XBWT) of the resulting labelled tree and build a compressed full-text index on that. Although this index can occasionally return false positives, it is usually much more compact than the alternatives. Following the established practice for datasets with many repetitions, we compare different full-text indexes by looking at the number of runs in the transformed strings. For a human Chr19 readset our preliminary experiments show that eliminating separator characters from the EBWT reduces the number of runs by 19%, from 220 million to 178 million, and using the XBWT reduces it by a further 15%, to 150 million.
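
As a concrete illustration of the trunk-and-graft construction, the following Python sketch builds a labelled tree from a toy genome and a few aligned reads. It is a simplified illustration, not the paper's code: here every read simply hangs off the trunk node at its alignment start, no paths are shared between reads, and the XBWT step is not shown; the genome and reads are invented.

class Node:
    def __init__(self, label):
        self.label = label
        self.children = []

def build_read_tree(genome, alignments):
    """genome: string; alignments: list of (start, read) pairs with 0-based starts."""
    root = Node("^")                          # artificial root above the trunk
    trunk = [root]
    for ch in genome:                         # trunk: one labelled node per base
        node = Node(ch)
        trunk[-1].children.append(node)
        trunk.append(node)
    for start, read in alignments:            # graft each read at its alignment start
        parent = trunk[start]                 # node just before the read's first base
        for ch in read:
            node = Node(ch)
            parent.children.append(node)
            parent = node
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(child) for child in node.children)

genome = "ACGTACGATT"
reads = [(0, "ACGT"), (4, "ACGA"), (4, "ACGT")]   # toy reads; the last one has a mismatch
tree = build_read_tree(genome, reads)
print("labelled tree with", count_nodes(tree), "nodes")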

Cite as

Travis Gagie, Garance Gourdel, and Giovanni Manzini. Compressing and Indexing Aligned Readsets. In 21st International Workshop on Algorithms in Bioinformatics (WABI 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 201, pp. 13:1-13:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{gagie_et_al:LIPIcs.WABI.2021.13,
  author =	{Gagie, Travis and Gourdel, Garance and Manzini, Giovanni},
  title =	{{Compressing and Indexing Aligned Readsets}},
  booktitle =	{21st International Workshop on Algorithms in Bioinformatics (WABI 2021)},
  pages =	{13:1--13:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-200-6},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{201},
  editor =	{Carbone, Alessandra and El-Kebir, Mohammed},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2021.13},
  URN =		{urn:nbn:de:0030-drops-143660},
  doi =		{10.4230/LIPIcs.WABI.2021.13},
  annote =	{Keywords: data compression, compact data structures, FM-index, Burrows-Wheeler Transform, EBWT, XBWT, DNA reads}
}
Document
Inter-Blockchain Protocols with the Isabelle Infrastructure Framework

Authors: Florian Kammüller and Uwe Nestmann

Published in: OASIcs, Volume 84, 2nd Workshop on Formal Methods for Blockchains (FMBC 2020)


Abstract
The main incentives of blockchain technology are distribution and distributed change, consistency, and consensus. Beyond just being a distributed ledger for digital currency, smart contracts add transaction protocols to blockchains to execute terms of a contract in a blockchain network. Inter-blockchain (IBC) protocols define and control exchanges between different blockchains. The Isabelle Infrastructure framework has been designed to serve security and privacy for IoT architectures by formal specification and stepwise attack analysis and refinement. A major case study of this framework is a distributed health care scenario for data consistency for GDPR compliance. This application led to the development of an abstract system specification of blockchains for IoT infrastructures. In this paper, we first give a summary of the concept of IBC. We then introduce an instantiation of the Isabelle Infrastructure framework to model blockchains. Based on this, we extend the model to instantiate different blockchains and formalize IBC protocols. We prove the concept by defining the generic property of global consistency and proving it in Isabelle.

Cite as

Florian Kammüller and Uwe Nestmann. Inter-Blockchain Protocols with the Isabelle Infrastructure Framework. In 2nd Workshop on Formal Methods for Blockchains (FMBC 2020). Open Access Series in Informatics (OASIcs), Volume 84, pp. 11:1-11:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kammuller_et_al:OASIcs.FMBC.2020.11,
  author =	{Kamm\"{u}ller, Florian and Nestmann, Uwe},
  title =	{{Inter-Blockchain Protocols with the Isabelle Infrastructure Framework}},
  booktitle =	{2nd Workshop on Formal Methods for Blockchains (FMBC 2020)},
  pages =	{11:1--11:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-169-6},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{84},
  editor =	{Bernardo, Bruno and Marmsoler, Diego},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.FMBC.2020.11},
  URN =		{urn:nbn:de:0030-drops-134249},
  doi =		{10.4230/OASIcs.FMBC.2020.11},
  annote =	{Keywords: Blockchain, smart contracts, interactive theorem proving, inter-blockchain protocols}
}
Document
SAT and Interactions (Dagstuhl Seminar 20061)

Authors: Olaf Beyersdorff, Uwe Egly, Meena Mahajan, and Cláudia Nalon

Published in: Dagstuhl Reports, Volume 10, Issue 2 (2020)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 20061 "SAT and Interactions". The seminar brought together theoreticians and practitioners from the areas of proof complexity and proof theory, SAT and QBF solving, MaxSAT, and modal logics, who discussed recent developments in their fields and embarked on an interdisciplinary exchange of ideas and techniques between these neighbouring subfields of SAT.

Cite as

Olaf Beyersdorff, Uwe Egly, Meena Mahajan, and Cláudia Nalon. SAT and Interactions (Dagstuhl Seminar 20061). In Dagstuhl Reports, Volume 10, Issue 2, pp. 1-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@Article{beyersdorff_et_al:DagRep.10.2.1,
  author =	{Beyersdorff, Olaf and Egly, Uwe and Mahajan, Meena and Nalon, Cl\'{a}udia},
  title =	{{SAT and Interactions (Dagstuhl Seminar 20061)}},
  pages =	{1--18},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2020},
  volume =	{10},
  number =	{2},
  editor =	{Beyersdorff, Olaf and Egly, Uwe and Mahajan, Meena and Nalon, Cl\'{a}udia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.10.2.1},
  URN =		{urn:nbn:de:0030-drops-130576},
  doi =		{10.4230/DagRep.10.2.1},
  annote =	{Keywords: SAT, MaxSAT, QBF, proof complexity, deep inference, modal logic, solving}
}
Document
Introduction to Microservice API Patterns (MAP)

Authors: Olaf Zimmermann, Mirko Stocker, Daniel Lübke, Cesare Pautasso, and Uwe Zdun

Published in: OASIcs, Volume 78, Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019)


Abstract
The Microservice API Patterns (MAP) language and supporting website premiered under this name at Microservices 2019. MAP distills proven, platform- and technology-independent solutions to recurring (micro-)service design and interface specification problems such as finding well-fitting service granularities, rightsizing message representations, and managing the evolution of APIs and their implementations. In this paper, we motivate the need for such a pattern language, outline the language organization and present two exemplary patterns describing alternative options for representing nested data. We also identify future research and development directions.
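
As a rough, language-level illustration of what "alternative options for representing nested data" can look like in practice, the snippet below contrasts an embedded representation of a referenced entity with a link-based one. The payloads and field names are invented for this example and are not taken from the paper or the MAP website.

import json

order_id, customer_id = "o-4711", "c-42"

# Option 1: embed the referenced entity in the message.
# One round trip and a self-contained payload, but larger messages
# and an embedded copy that can become stale.
embedded = {
    "orderId": order_id,
    "total": 99.90,
    "customer": {"customerId": customer_id, "name": "Ada Lovelace"},
}

# Option 2: include only a link to the referenced entity.
# Smaller messages and always-fresh data, but the client needs a follow-up request.
linked = {
    "orderId": order_id,
    "total": 99.90,
    "customer": {"href": f"/customers/{customer_id}"},
}

print(json.dumps(embedded, indent=2))
print(json.dumps(linked, indent=2))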

Cite as

Olaf Zimmermann, Mirko Stocker, Daniel Lübke, Cesare Pautasso, and Uwe Zdun. Introduction to Microservice API Patterns (MAP). In Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019). Open Access Series in Informatics (OASIcs), Volume 78, pp. 4:1-4:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{zimmermann_et_al:OASIcs.Microservices.2017-2019.4,
  author =	{Zimmermann, Olaf and Stocker, Mirko and L\"{u}bke, Daniel and Pautasso, Cesare and Zdun, Uwe},
  title =	{{Introduction to Microservice API Patterns (MAP)}},
  booktitle =	{Joint Post-proceedings of the First and Second International Conference on Microservices (Microservices 2017/2019)},
  pages =	{4:1--4:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-137-5},
  ISSN =	{2190-6807},
  year =	{2020},
  volume =	{78},
  editor =	{Cruz-Filipe, Lu{\'\i}s and Giallorenzo, Saverio and Montesi, Fabrizio and Peressotti, Marco and Rademacher, Florian and Sachweh, Sabine},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.Microservices.2017-2019.4},
  URN =		{urn:nbn:de:0030-drops-118268},
  doi =		{10.4230/OASIcs.Microservices.2017-2019.4},
  annote =	{Keywords: application programming interfaces, distributed systems, enterprise application integration, service-oriented computing, software architecture}
}
Document
Short Paper
Translation-Based Dictionary Alignment for Under-Resourced Bantu Languages

Authors: Thomas Eckart, Sonja Bosch, Dirk Goldhahn, Uwe Quasthoff, and Bettina Klimek

Published in: OASIcs, Volume 70, 2nd Conference on Language, Data and Knowledge (LDK 2019)


Abstract
Despite a large number of active speakers, most Bantu languages can be considered under- or less-resourced languages. This holds especially for the current state of lexicographical data, which is highly unsatisfactory in terms of size, quality, and consistency of format and provided information. Unfortunately, this concerns not only the amount and quality of data for monolingual dictionaries, but also their lack of interconnection to form a network of dictionaries. Current endeavours to promote the use of Bantu languages in primary and secondary education in countries like South Africa show the urgent need for high-quality digital dictionaries. This contribution describes a prototypical implementation for aligning Xhosa, Zimbabwean Ndebele and Kalanga language dictionaries based on their English translations, using simple string-matching techniques and WordNet URIs. The RDF-based representation of the data using the Bantu Language Model (BLM) and partial references to the established WordNet dataset supported this process significantly.
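
The core alignment step can be pictured with a short Python sketch: two bilingual dictionaries are linked wherever their (normalised) English translations coincide. The entries below are toy examples, and the paper's additional linking via shared WordNet URIs is not shown.

def normalise(gloss):
    return gloss.strip().lower()

def align_by_translation(dict_a, dict_b):
    """dict_a, dict_b map headwords to English translations; return matched pairs."""
    by_gloss = {}
    for word, gloss in dict_b.items():
        by_gloss.setdefault(normalise(gloss), []).append(word)
    pairs = []
    for word_a, gloss in dict_a.items():
        for word_b in by_gloss.get(normalise(gloss), []):
            pairs.append((word_a, word_b, gloss))
    return pairs

xhosa = {"inja": "dog", "ikati": "cat"}        # toy entries, not real dictionary data
ndebele = {"inja": "dog", "isihlahla": "tree"}
print(align_by_translation(xhosa, ndebele))    # [('inja', 'inja', 'dog')]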

Cite as

Thomas Eckart, Sonja Bosch, Dirk Goldhahn, Uwe Quasthoff, and Bettina Klimek. Translation-Based Dictionary Alignment for Under-Resourced Bantu Languages. In 2nd Conference on Language, Data and Knowledge (LDK 2019). Open Access Series in Informatics (OASIcs), Volume 70, pp. 17:1-17:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{eckart_et_al:OASIcs.LDK.2019.17,
  author =	{Eckart, Thomas and Bosch, Sonja and Goldhahn, Dirk and Quasthoff, Uwe and Klimek, Bettina},
  title =	{{Translation-Based Dictionary Alignment for Under-Resourced Bantu Languages}},
  booktitle =	{2nd Conference on Language, Data and Knowledge (LDK 2019)},
  pages =	{17:1--17:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-105-4},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{70},
  editor =	{Eskevich, Maria and de Melo, Gerard and F\"{a}th, Christian and McCrae, John P. and Buitelaar, Paul and Chiarcos, Christian and Klimek, Bettina and Dojchinovski, Milan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2019.17},
  URN =		{urn:nbn:de:0030-drops-103819},
  doi =		{10.4230/OASIcs.LDK.2019.17},
  annote =	{Keywords: Cross-language dictionary alignment, Bantu languages, translation, linguistic linked data, under-resourced languages}
}
Document
Database Architectures for Modern Hardware (Dagstuhl Seminar 18251)

Authors: Peter A. Boncz, Goetz Graefe, Binsheng He, and Kai-Uwe Sattler

Published in: Dagstuhl Reports, Volume 8, Issue 6 (2019)


Abstract
The requirements of emerging applications on the one hand and the trends in computing hardware and systems on the other hand demand a fundamental rethinking of current data management architectures. Based on the broad consensus that this rethinking requires expertise from different research disciplines, the goal of this seminar was to bring together researchers and practitioners from these areas representing both the software and hardware sides and to foster cross-cutting architectural discussions. The outcome of this seminar was not only an identification of promising hardware technologies and their exploitation in data management systems but also a set of use cases, studies, and experiments for new architectural concepts.

Cite as

Peter A. Boncz, Goetz Graefe, Binsheng He, and Kai-Uwe Sattler. Database Architectures for Modern Hardware (Dagstuhl Seminar 18251). In Dagstuhl Reports, Volume 8, Issue 6, pp. 63-76, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{boncz_et_al:DagRep.8.6.63,
  author =	{Boncz, Peter A. and Graefe, Goetz and He, Binsheng and Sattler, Kai-Uwe},
  title =	{{Database Architectures for Modern Hardware (Dagstuhl Seminar 18251)}},
  pages =	{63--76},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{8},
  number =	{6},
  editor =	{Boncz, Peter A. and Graefe, Goetz and He, Binsheng and Sattler, Kai-Uwe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.8.6.63},
  URN =		{urn:nbn:de:0030-drops-100561},
  doi =		{10.4230/DagRep.8.6.63},
  annote =	{Keywords: co-processors, computer architecture, database systems, hardware support for databases, non-volatile memory}
}
Document
On Undetected Redundancy in the Burrows-Wheeler Transform

Authors: Uwe Baier

Published in: LIPIcs, Volume 105, 29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018)


Abstract
The Burrows-Wheeler Transform (BWT) is an invertible permutation of a text that is known to be highly compressible but also useful for sequence analysis, which makes the BWT highly attractive for lossless data compression. In this paper, we present a new technique to reduce the size of a BWT using its combinatorial properties, while keeping it invertible. The technique can be applied to any BWT-based compressor, and, as experiments show, is able to reduce the encoding size by 8-16% on average and up to 33-57% in the best cases (depending on the BWT compressor used), making BWT-based compressors competitive or even superior to today's best lossless compressors.
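
For context, the sketch below only recalls the object the paper works on: the BWT of a text (with a sentinel) and the number of equal-letter runs, a standard measure of its compressibility. The tunnelling technique itself is not reproduced here; the naive construction is for illustration only.

def bwt(text):
    """Naive BWT via sorting all rotations of text + '$'; fine for small inputs."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

def runs(s):
    """Number of maximal equal-letter runs in s."""
    return sum(1 for i, ch in enumerate(s) if i == 0 or ch != s[i - 1])

transformed = bwt("banana")
print(transformed, "with", runs(transformed), "runs")   # annb$aa with 5 runs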

Cite as

Uwe Baier. On Undetected Redundancy in the Burrows-Wheeler Transform. In 29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 105, pp. 3:1-3:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{baier:LIPIcs.CPM.2018.3,
  author =	{Baier, Uwe},
  title =	{{On Undetected Redundancy in the Burrows-Wheeler Transform}},
  booktitle =	{29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018)},
  pages =	{3:1--3:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-074-3},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{105},
  editor =	{Navarro, Gonzalo and Sankoff, David and Zhu, Binhai},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2018.3},
  URN =		{urn:nbn:de:0030-drops-87049},
  doi =		{10.4230/LIPIcs.CPM.2018.3},
  annote =	{Keywords: Lossless data compression, BWT, Tunneling}
}
Document
Being Van Kampen in Presheaf Topoi is a Uniqueness Property

Authors: Harald König and Uwe Wolter

Published in: LIPIcs, Volume 72, 7th Conference on Algebra and Coalgebra in Computer Science (CALCO 2017)


Abstract
Fibred semantics is the foundation of the model-instance pattern of software engineering. Software models can often be formalized as objects of presheaf topoi, e.g. the category of directed graphs. Multimodeling requires the construction of colimits of diagrams of single models and their instances, while decomposition of instances of the multimodel is given by pullback. Compositionality requires an exact interplay of these operations, i.e., the diagrams must enjoy the Van Kampen property. However, checking the validity of the Van Kampen property algorithmically based on its definition is often impossible. In this paper we state a necessary and sufficient yet easily checkable condition for the Van Kampen property to hold for diagrams in presheaf topoi. It is based on a uniqueness property of path-like structures within the defining congruence classes that make up the colimiting cocone of the models. We thus add to the statement "Being Van Kampen is a Universal Property" by Heindel and Sobociński, presented at CALCO 2009, the fact that the Van Kampen property reveals a set-based structural uniqueness feature.

Cite as

Harald König and Uwe Wolter. Being Van Kampen in Presheaf Topoi is a Uniqueness Property. In 7th Conference on Algebra and Coalgebra in Computer Science (CALCO 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 72, pp. 16:1-16:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{konig_et_al:LIPIcs.CALCO.2017.16,
  author =	{K\"{o}nig, Harald and Wolter, Uwe},
  title =	{{Being Van Kampen in Presheaf Topoi is a Uniqueness Property}},
  booktitle =	{7th Conference on Algebra and Coalgebra in Computer Science (CALCO 2017)},
  pages =	{16:1--16:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-033-0},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{72},
  editor =	{Bonchi, Filippo and K\"{o}nig, Barbara},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CALCO.2017.16},
  URN =		{urn:nbn:de:0030-drops-80320},
  doi =		{10.4230/LIPIcs.CALCO.2017.16},
  annote =	{Keywords: Van Kampen Cocone, Presheaf Topos, Fibred Semantics, Mapping Path}
}
Document
Invited Talk
Forward Progress on GPU Concurrency (Invited Talk)

Authors: Alastair F. Donaldson, Jeroen Ketema, Tyler Sorensen, and John Wickerson

Published in: LIPIcs, Volume 85, 28th International Conference on Concurrency Theory (CONCUR 2017)


Abstract
The tutorial at CONCUR will provide a practical overview of work undertaken over the last six years in the Multicore Programming Group at Imperial College London, and with collaborators internationally, related to understanding and reasoning about concurrency in software designed for acceleration on GPUs. In this article we provide an overview of this work, which includes contributions to data race analysis, compiler testing, memory model understanding and formalisation, and most recently efforts to enable portable GPU implementations of algorithms that require forward progress guarantees.

Cite as

Alastair F. Donaldson, Jeroen Ketema, Tyler Sorensen, and John Wickerson. Forward Progress on GPU Concurrency (Invited Talk). In 28th International Conference on Concurrency Theory (CONCUR 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 85, pp. 1:1-1:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{donaldson_et_al:LIPIcs.CONCUR.2017.1,
  author =	{Donaldson, Alastair F. and Ketema, Jeroen and Sorensen, Tyler and Wickerson, John},
  title =	{{Forward Progress on GPU Concurrency}},
  booktitle =	{28th International Conference on Concurrency Theory (CONCUR 2017)},
  pages =	{1:1--1:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-048-4},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{85},
  editor =	{Meyer, Roland and Nestmann, Uwe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2017.1},
  URN =		{urn:nbn:de:0030-drops-78055},
  doi =		{10.4230/LIPIcs.CONCUR.2017.1},
  annote =	{Keywords: GPUs, concurrency, formal verification, memory models, data races}
}
Document
Invited Talk
Admissibility in Games with Imperfect Information (Invited Talk)

Authors: Romain Brenguier, Arno Pauly, Jean-François Raskin, and Ocan Sankur

Published in: LIPIcs, Volume 85, 28th International Conference on Concurrency Theory (CONCUR 2017)


Abstract
In this invited paper, we study the concept of admissible strategies for two-player win/lose infinite sequential games with imperfect information. We show that, in stark contrast with the perfect information variant, admissible strategies are only guaranteed to exist when players have objectives that are closed sets. As a consequence, we also study decision problems related to the existence of admissible strategies for regular games as well as finite duration games.

Cite as

Romain Brenguier, Arno Pauly, Jean-François Raskin, and Ocan Sankur. Admissibility in Games with Imperfect Information (Invited Talk). In 28th International Conference on Concurrency Theory (CONCUR 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 85, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{brenguier_et_al:LIPIcs.CONCUR.2017.2,
  author =	{Brenguier, Romain and Pauly, Arno and Raskin, Jean-Fran\c{c}ois and Sankur, Ocan},
  title =	{{Admissibility in Games with Imperfect Information}},
  booktitle =	{28th International Conference on Concurrency Theory (CONCUR 2017)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-048-4},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{85},
  editor =	{Meyer, Roland and Nestmann, Uwe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2017.2},
  URN =		{urn:nbn:de:0030-drops-78066},
  doi =		{10.4230/LIPIcs.CONCUR.2017.2},
  annote =	{Keywords: Admissibility, non-zero sum games, reactive synthesis, imperfect information}
}
Document
Invited Talk
Probabilistic Programming (Invited Talk)

Authors: Hongseok Yang

Published in: LIPIcs, Volume 85, 28th International Conference on Concurrency Theory (CONCUR 2017)


Abstract
Probabilistic programming refers to the idea of using standard programming constructs for specifying probabilistic models from machine learning and statistics, and employing generic inference algorithms for answering various queries on these models, such as posterior inference and estimation of model evidence. Although this idea itself is not new and was, in fact, explored by several programming-language and statistics researchers in the early 2000s, it is only in the last few years that probabilistic programming has gained a large amount of attention among researchers in machine learning and programming languages, and that expressive and efficient probabilistic programming systems (such as Anglican, Church, Figaro, Infer.NET, PSI, PyMC, Stan, and Venture) started to appear and have been taken up by a nontrivial number of users. The primary goal of my talk is to introduce probabilistic programming to the CONCUR/QUEST/FORMATS audience. At the end of my talk, I want the audience to understand basic results and techniques in probabilistic programming and to feel that these results and techniques are relevant or at least related to what she or he studies, although they typically come from foreign research areas, such as machine learning and statistics. My talk will contain both technical materials and lessons that I learnt from my machine-learning colleagues in Oxford, who are developing a highly expressive higher-order probabilistic programming language called Anglican. It will also include my work on the denotational semantics of higher-order probabilistic programming languages and their inference algorithms, which is jointly pursued with colleagues in Cambridge, Edinburgh, Oxford and Tübingen.
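
As a toy illustration of the idea (unrelated to Anglican's actual machinery), the Python sketch below writes a generative model as ordinary code and answers a posterior query with plain likelihood weighting; the model and numbers are invented for this example.

import random

def model(data):
    """Draw a coin bias from a uniform prior, then weight by the data likelihood."""
    bias = random.random()                     # latent variable
    weight = 1.0
    for flip in data:                          # condition on each observed flip
        weight *= bias if flip else 1.0 - bias
    return bias, weight

def posterior_mean(data, num_samples=100_000):
    """Generic inference: importance sampling with the prior as proposal."""
    total_weight = total_weighted_bias = 0.0
    for _ in range(num_samples):
        bias, weight = model(data)
        total_weight += weight
        total_weighted_bias += weight * bias
    return total_weighted_bias / total_weight

flips = [True, True, True, False]              # 3 heads, 1 tail
print(round(posterior_mean(flips), 3))         # close to the exact value 4/6 = 0.667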

Cite as

Hongseok Yang. Probabilistic Programming (Invited Talk). In 28th International Conference on Concurrency Theory (CONCUR 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 85, p. 3:1, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{yang:LIPIcs.CONCUR.2017.3,
  author =	{Yang, Hongseok},
  title =	{{Probabilistic Programming}},
  booktitle =	{28th International Conference on Concurrency Theory (CONCUR 2017)},
  pages =	{3:1--3:1},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-048-4},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{85},
  editor =	{Meyer, Roland and Nestmann, Uwe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2017.3},
  URN =		{urn:nbn:de:0030-drops-78088},
  doi =		{10.4230/LIPIcs.CONCUR.2017.3},
  annote =	{Keywords: Probabilistic programming, Machine learning, Denotational semantics}
}
  • Refine by Author
  • 6 Sattler, Kai-Uwe
  • 4 Aßmann, Uwe
  • 3 Bézivin, Jean
  • 3 Catalyurek, Umit V.
  • 3 Glässer, Uwe
  • Show More...

  • Refine by Classification
  • 2 Theory of computation → Data compression
  • 1 Applied computing → Document analysis
  • 1 Computing methodologies → Phonology / morphology
  • 1 Information systems → Data management systems
  • 1 Information systems → Dictionaries
  • Show More...

  • Refine by Keyword
  • 3 Concurrency
  • 3 software
  • 2 Abstract State Machines
  • 2 Compositionality
  • 2 Conceptual modeling
  • Show More...

  • Refine by Type
  • 137 document

  • Refine by Publication Year
  • 49 2009
  • 40 2017
  • 7 2006
  • 7 2008
  • 5 2016
  • Show More...
