Dagstuhl Seminar Proceedings, Volume 8261



Publication Details

  • Published: 2008-11-20
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents

Document
08261 Abstracts Collection – Structure-Based Compression of Complex Massive Data

Authors: Stefan Böttcher, Markus Lohrey, Sebastian Maneth, and Wojciech Rytter


Abstract
From June 22, 2008 to June 27, 2008 the Dagstuhl Seminar 08261 "Structure-Based Compression of Complex Massive Data" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Stefan Böttcher, Markus Lohrey, Sebastian Maneth, and Wojciech Rytter. 08261 Abstracts Collection – Structure-Based Compression of Complex Massive Data. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{bottcher_et_al:DagSemProc.08261.1,
  author =	{B\"{o}ttcher, Stefan and Lohrey, Markus and Maneth, Sebastian and Rytter, Wojciech},
  title =	{{08261 Abstracts Collection – Structure-Based Compression of Complex Massive Data}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.1},
  URN =		{urn:nbn:de:0030-drops-16948},
  doi =		{10.4230/DagSemProc.08261.1},
  annote =	{Keywords: Data compression, algorithms for compressed strings and trees, XML-compression}
}
Document
08261 Executive Summary – Structure-Based Compression of Complex Massive Data

Authors: Stefan Böttcher, Markus Lohrey, Sebastian Maneth, and Wojciech Rytter


Abstract
From June 22 to June 27, 2008, the Dagstuhl Seminar 08261 "Structure-Based Compression of Complex Massive Data" took place at the Conference and Research Center (IBFI) in Dagstuhl. 22 researchers with interests in the theory and application of compression and in computation on compressed structures met to present their current work and to discuss future directions.

Cite as

Stefan Böttcher, Markus Lohrey, Sebastian Maneth, and Wojciech Rytter. 08261 Executive Summary – Structure-Based Compression of Complex Massive Data. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-4, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{bottcher_et_al:DagSemProc.08261.2,
  author =	{B\"{o}ttcher, Stefan and Lohrey, Markus and Maneth, Sebastian and Rytter, Wojciech},
  title =	{{08261 Executive Summary – Structure-Based Compression of Complex Massive Data}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--4},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.2},
  URN =		{urn:nbn:de:0030-drops-16814},
  doi =		{10.4230/DagSemProc.08261.2},
  annote =	{Keywords: Compression, Succinct Data Structure, Pattern Matching, Text Search, XML Query}
}
Document
A Rewrite Approach for Pattern Containment – Application to Query Evaluation on Compressed Documents

Authors: Barbara Fila-Kordy


Abstract
In this paper we introduce an approach for handling the containment problem for the fragment XP(/,//,[ ],*) of XPath. Using rewriting techniques, we define a necessary and sufficient condition for pattern containment. This rewrite view is then adapted to query evaluation on XML documents, and it remains valid even if the documents are given in a compressed form, as dags.

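To make the containment question concrete, the following minimal Python sketch (an illustration only, not the paper's rewrite-based method; the tree and pattern encodings are ad hoc) evaluates linear patterns from XP(/,//,*) on a small node-labelled document and observes that, on this document, every match of /a/b is also a match of /a//b, which is exactly what containment of /a/b in /a//b guarantees in general.

# A toy evaluator for linear patterns from the XPath fragment XP(/, //, *):
# a pattern is a list of (axis, label) steps, axis is 'child' or 'descendant',
# and label '*' matches any element name.

def nodes(tree, path=()):
    """Enumerate (node_id, label, subtree) for every node of the tree."""
    label, children = tree
    yield path, label, tree
    for i, child in enumerate(children):
        yield from nodes(child, path + (i,))

def matches(pattern, root):
    """Return the ids of nodes reached by following the pattern from the root."""
    current = {(): root}                     # start at the (virtual) document root
    for axis, label in pattern:
        nxt = {}
        for nid, subtree in current.items():
            _, children = subtree
            if axis == 'child':
                cands = [(nid + (i,), c) for i, c in enumerate(children)]
            else:  # 'descendant': every strict descendant of the current node
                cands = [(cid, c) for cid, _, c in nodes(subtree, nid) if cid != nid]
            for cid, c in cands:
                if label == '*' or c[0] == label:
                    nxt[cid] = c
        current = nxt
    return set(current)

# A small document:  <a><b/><c><b/></c></a>
doc = ('a', [('b', []), ('c', [('b', [])])])
root = ('#doc', [doc])                       # wrap so '/a' is a child step

p1 = [('child', 'a'), ('child', 'b')]        # /a/b
p2 = [('child', 'a'), ('descendant', 'b')]   # /a//b

assert matches(p1, root) <= matches(p2, root)   # /a/b is contained in /a//b
print(sorted(matches(p1, root)), sorted(matches(p2, root)))
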
Cite as

Barbara Fila-Kordy. A Rewrite Approach for Pattern Containment – Application to Query Evaluation on Compressed Documents. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{filakordy:DagSemProc.08261.3,
  author =	{Fila-Kordy, Barbara},
  title =	{{A Rewrite Approach for Pattern Containment – Application to Query Evaluation on Compressed Documents}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.3},
  URN =		{urn:nbn:de:0030-drops-16798},
  doi =		{10.4230/DagSemProc.08261.3},
  annote =	{Keywords: Pattern Containment, Compressed Documents}
}
Document
A Space-Saving Approximation Algorithm for Grammar-Based Compression

Authors: Hiroshi Sakamoto


Abstract
A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log ... log n > 1. This ratio can thus be regarded as almost O(log n), which is currently the best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings over a k-letter alphabet [12].

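For a concrete view of the problem statement, the following toy sketch (a Re-Pair-style compressor, offered purely as an illustration; it is not the space-saving algorithm of the paper) builds a small context-free grammar deriving the input string by repeatedly replacing the most frequent adjacent pair of symbols with a fresh nonterminal.

from collections import Counter

def repair(text):
    """Toy grammar-based compressor: repeatedly replace the most frequent
    adjacent pair by a new nonterminal (Re-Pair style, for illustration)."""
    seq = list(text)
    rules = {}                      # nonterminal -> (left, right)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = f'R{next_id}'; next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq = out
    return seq, rules               # start sequence + rules form the CFG

def expand(symbol, rules):
    """Derive the string generated by a symbol of the grammar."""
    if symbol not in rules:
        return symbol
    left, right = rules[symbol]
    return expand(left, rules) + expand(right, rules)

start, rules = repair('abababababab')
assert ''.join(expand(s, rules) for s in start) == 'abababababab'
print(start, rules)   # for long repetitive inputs the grammar is far smaller than the text
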
Cite as

Hiroshi Sakamoto. A Space-Saving Approximation Algorithm for Grammar-Based Compression. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{sakamoto:DagSemProc.08261.4,
  author =	{Sakamoto, Hiroshi},
  title =	{{A Space-Saving Approximation Algorithm for Grammar-Based Compression}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.4},
  URN =		{urn:nbn:de:0030-drops-16937},
  doi =		{10.4230/DagSemProc.08261.4},
  annote =	{Keywords: Grammar based compression, space efficient approximation}
}
Document
An Efficient Algorithm to Test Square-Freeness of Strings Compressed by Balanced Straight Line Program

Authors: Wataru Matsubara, Shunsuke Inenaga, and Ayumi Shinohara


Abstract
In this paper we study the problem of deciding whether a given compressed string contains a square. A string x is called a square if x = zz, where z = u^k implies k = 1 and u = z (that is, z is primitive). A string w is said to be square-free if no substring of w is a square. Many efficient algorithms to test whether a given string is square-free have been developed so far. However, very little is known about testing square-freeness of a given compressed string. In this paper, we give an O(max(n^2, n log^2 N))-time, O(n^2)-space solution to test square-freeness of a given compressed string, where n and N are the sizes of a given compressed string and the corresponding decompressed string, respectively. Our input strings are compressed by a balanced straight line program (BSLP). We remark that a BSLP can achieve exponential compression, that is, N = O(2^n). Hence no decompress-then-test approach can be better than our method in the worst case.

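For comparison with the decompress-then-test baseline mentioned in the abstract, the sketch below (illustrative only; the grammar and names are made up) expands a small balanced SLP and tests square-freeness naively, which takes time and space proportional to the decompressed length N, and N can be as large as 2^n.

def expand(symbol, rules, memo=None):
    """Decompress a straight line program: each rule maps a nonterminal to a
    pair of symbols; terminals are single characters."""
    memo = {} if memo is None else memo
    if symbol not in rules:
        return symbol
    if symbol not in memo:
        left, right = rules[symbol]
        memo[symbol] = expand(left, rules, memo) + expand(right, rules, memo)
    return memo[symbol]

def is_square_free(w):
    """Naive O(N^3) test: look for a non-empty factor of the form zz."""
    n = len(w)
    for i in range(n):
        for length in range(1, (n - i) // 2 + 1):
            if w[i:i + length] == w[i + length:i + 2 * length]:
                return False
    return True

# A balanced SLP with 3 rules deriving a string of length 2^3 = 8:
# X3 -> X2 X2, X2 -> X1 X1, X1 -> a b   (so X3 derives "abababab")
rules = {'X1': ('a', 'b'), 'X2': ('X1', 'X1'), 'X3': ('X2', 'X2')}
w = expand('X3', rules)
print(w, is_square_free(w))   # "abababab" is not square-free: it contains "abab" = (ab)(ab)
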
Cite as

Wataru Matsubara, Shunsuke Inenaga, and Ayumi Shinohara. An Efficient Algorithm to Test Square-Freeness of Strings Compressed by Balanced Straight Line Program. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{matsubara_et_al:DagSemProc.08261.5,
  author =	{Matsubara, Wataru and Inenaga, Shunsuke and Shinohara, Ayumi},
  title =	{{An Efficient Algorithm to Test Square-Freeness of Strings Compressed by Balanced Straight Line Program}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.5},
  URN =		{urn:nbn:de:0030-drops-16804},
  doi =		{10.4230/DagSemProc.08261.5},
  annote =	{Keywords: Square Freeness, Straight Line Program}
}
Document
An In-Memory XQuery/XPath Engine over a Compressed Structured Text Representation

Authors: Angela Bonifati, Gregory Leighton, Veli Mäkinen, Sebastian Maneth, Gonzalo Navarro, and Andrea Pugliese


Abstract
We describe the architecture and main algorithmic design decisions for an XQuery/XPath processing engine over XML collections which will be represented using a self-indexing approach, that is, a compressed representation that will allow for basic searching and navigational operations in compressed form. The goal is a structure that occupies little space and thus permits manipulating large collections in main memory.

Cite as

Angela Bonifati, Gregory Leighton, Veli Mäkinen, Sebastian Maneth, Gonzalo Navarro, and Andrea Pugliese. An In-Memory XQuery/XPath Engine over a Compressed Structured Text Representation. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{bonifati_et_al:DagSemProc.08261.6,
  author =	{Bonifati, Angela and Leighton, Gregory and M\"{a}kinen, Veli and Maneth, Sebastian and Navarro, Gonzalo and Pugliese, Andrea},
  title =	{{An In-Memory XQuery/XPath Engine over a Compressed Structured Text Representation}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--17},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.6},
  URN =		{urn:nbn:de:0030-drops-16776},
  doi =		{10.4230/DagSemProc.08261.6},
  annote =	{Keywords: Compressed self-index, compressed XML representation, XPath, XQuery}
}
Document
Clone Detection via Structural Abstraction

Authors: William S. Evans, Christoph W. Fraser, and Fei Ma


Abstract
This paper describes the design, implementation, and application of a new algorithm to detect cloned code. It operates on the abstract syntax trees formed by many compilers as an intermediate representation. It extends prior work by identifying clones even when arbitrary subtrees have been changed. On a 440,000-line code corpus, 20-50% of the clones it detected were missed by previous methods. The method also identifies cloning in declarations, so it is somewhat more general than conventional procedural abstraction.

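As a point of reference, the sketch below (an illustration with a toy AST encoding; it is not the authors' algorithm, which additionally finds clones whose subtrees have been changed) reports exact clones by grouping structurally identical subtrees under a canonical serialization.

from collections import defaultdict

def serialize(node):
    """Canonical string for a subtree: (op child1 child2 ...)."""
    op, children = node
    return '(' + op + ''.join(' ' + serialize(c) for c in children) + ')'

def exact_clones(root, min_size=2):
    """Group structurally identical subtrees (exact clones only)."""
    groups = defaultdict(list)

    def walk(node):
        op, children = node
        size = 1 + sum(walk(c) for c in children)
        if size >= min_size:
            groups[serialize(node)].append(node)
        return size

    walk(root)
    return {key: occ for key, occ in groups.items() if len(occ) > 1}

# Toy AST with two occurrences of the expression (x + 1) * y
ast = ('block', [
    ('assign', [('a', []), ('mul', [('add', [('x', []), ('1', [])]), ('y', [])])]),
    ('assign', [('b', []), ('mul', [('add', [('x', []), ('1', [])]), ('y', [])])]),
])
for key, occ in exact_clones(ast).items():
    print(len(occ), 'occurrences of', key)
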
Cite as

William S. Evans, Christoph W. Fraser, and Fei Ma. Clone Detection via Structural Abstraction. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-10, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{evans_et_al:DagSemProc.08261.7,
  author =	{Evans, William S. and Fraser, Christoph W. and Ma, Fei},
  title =	{{Clone Detection via Structural Abstraction}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--10},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.7},
  URN =		{urn:nbn:de:0030-drops-16784},
  doi =		{10.4230/DagSemProc.08261.7},
  annote =	{Keywords: Clone Detection}
}
Document
Compression vs Queryability - A Case Study

Authors: Siva Anantharaman


Abstract
Some compromise on compression is known to be necessary if the relative positions of the information stored by semi-structured documents are to remain accessible under queries. With this in view, we compare, on an example, the 'query-friendliness' of XML documents when compressed into straight-line tree grammars which are either regular or context-free. The queries considered are in a limited fragment of XPath, corresponding to a type of patterns; each such query naturally defines a non-deterministic bottom-up 'query automaton' that runs just as well on a tree as on its compressed dag.

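To illustrate the setting, the following sketch (ad hoc representation, not the paper's construction) shares identical subtrees of a small XML-like tree into a dag by hash-consing and then evaluates a simple bottom-up query, "does the subtree contain a b element?", once per shared dag node.

def to_dag(tree, table=None):
    """Hash-cons a tree (label, children) so that identical subtrees are
    represented by a single shared node of the resulting dag."""
    table = {} if table is None else table
    label, children = tree
    shared = tuple(to_dag(c, table) for c in children)
    key = (label, shared)
    if key not in table:
        table[key] = key            # the key tuple itself serves as the dag node
    return table[key]

def contains_b(node, memo=None):
    """A tiny bottom-up 'query': is some descendant-or-self labelled 'b'?
    Each shared dag node is evaluated only once, thanks to memoization."""
    memo = {} if memo is None else memo
    if id(node) not in memo:
        label, children = node
        memo[id(node)] = (label == 'b') or any(contains_b(c, memo) for c in children)
    return memo[id(node)]

# <a><c><b/></c><c><b/></c></a>: the two identical <c><b/></c> subtrees
# become one shared dag node.
tree = ('a', [('c', [('b', [])]), ('c', [('b', [])])])
dag = to_dag(tree)
print(dag[1][0] is dag[1][1])      # True: the subtrees are shared
print(contains_b(dag))             # True
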
Cite as

Siva Anantharaman. Compression vs Queryability - A Case Study. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{anantharaman:DagSemProc.08261.8,
  author =	{Anantharaman, Siva},
  title =	{{Compression vs Queryability - A Case Study}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--9},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.8},
  URN =		{urn:nbn:de:0030-drops-16762},
  doi =		{10.4230/DagSemProc.08261.8},
  annote =	{Keywords: Tree automata, Tree Grammars, Dags, XML documents, Queries}
}
Document
Optimizing XML Compression in XQueC

Authors: Andrei Arion, Angela Bonifati, Ioana Manolescu, and Andrea Pugliese


Abstract
We present our approach to the problem of optimizing compression choices in the context of the XQueC compressed XML database system. In XQueC, data items are aggregated into containers, which are further grouped to be compressed together. This way, XQueC is able to exploit data commonalities and to perform query evaluation in the compressed domain, with the aim of improving both compression and querying performance. However, different compression algorithms have different performance and support different sets of operations in the compressed domain. Therefore, choosing how to group containers and which compression algorithm to apply to each group is a challenging issue. We address this problem through an appropriate cost model and a suitable blend of heuristics which, based on a given query workload, are capable of driving appropriate compression choices.

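The shape of this optimization problem can be conveyed with a deliberately simplified sketch: assume each container may be compressed with one of a few algorithms, each with a made-up compression ratio and a flag saying whether equality predicates can be evaluated without decompression; a toy cost then charges storage plus a decompression penalty for every workload predicate that the chosen algorithm cannot answer in the compressed domain. This ignores the grouping of containers and is not XQueC's actual cost model or heuristics.

# Hypothetical per-algorithm properties (ratio = compressed/original size,
# eq_in_compressed = equality predicates can run without decompression).
ALGORITHMS = {
    'huffman': {'ratio': 0.55, 'eq_in_compressed': True},
    'gzip':    {'ratio': 0.30, 'eq_in_compressed': False},
    'none':    {'ratio': 1.00, 'eq_in_compressed': True},
}

def cost(container_size, algo, predicate_hits, decompress_penalty=5.0):
    """Toy cost: storage plus a penalty for each workload predicate that must
    decompress the container because the algorithm cannot answer it directly."""
    props = ALGORITHMS[algo]
    storage = container_size * props['ratio']
    penalty = 0.0 if props['eq_in_compressed'] else predicate_hits * decompress_penalty
    return storage + penalty

def choose(containers, workload_hits):
    """Pick, per container, the algorithm with the smallest toy cost."""
    plan = {}
    for name, size in containers.items():
        hits = workload_hits.get(name, 0)
        plan[name] = min(ALGORITHMS, key=lambda a: cost(size, a, hits))
    return plan

containers = {'//book/title': 40.0, '//book/isbn': 10.0}   # sizes in KB
workload_hits = {'//book/title': 12, '//book/isbn': 0}     # equality predicates per container
print(choose(containers, workload_hits))
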
Cite as

Andrei Arion, Angela Bonifati, Ioana Manolescu, and Andrea Pugliese. Optimizing XML Compression in XQueC. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{arion_et_al:DagSemProc.08261.9,
  author =	{Arion, Andrei and Bonifati, Angela and Manolescu, Ioana and Pugliese, Andrea},
  title =	{{Optimizing XML Compression in XQueC}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--12},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.9},
  URN =		{urn:nbn:de:0030-drops-16924},
  doi =		{10.4230/DagSemProc.08261.9},
  annote =	{Keywords: XML compression}
}
Document
Storage and Retrieval of Individual Genomes

Authors: Veli Mäkinen, Gonzalo Navarro, Jouni Sirén, and Niko Välimäki


Abstract
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is plausible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction remains a constant factor when more sequences are added to the collection. We develop a new self-index suited for the repetitive sequence collection setting. Its expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction is no longer constant, but depends on N/n. We believe the structure developed in this work will provide a fundamental basis for the storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.

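The setting can be made concrete with a small sketch (an ad hoc representation, not the self-index developed in the paper): each individual sequence is stored as the shared base sequence plus a short list of basic edit operations, so the stored differences grow with the number s of variations rather than with the total length N.

def apply_edits(base, edits):
    """Reconstruct an individual sequence from the base sequence and a list of
    basic edit operations, each given as (position in base, kind, payload)."""
    out, i = [], 0
    for pos, kind, payload in sorted(edits):
        out.append(base[i:pos])
        if kind == 'sub':              # substitute the character at pos
            out.append(payload); i = pos + 1
        elif kind == 'ins':            # insert before pos
            out.append(payload); i = pos
        elif kind == 'del':            # delete `payload` characters starting at pos
            i = pos + payload
    out.append(base[i:])
    return ''.join(out)

base = 'ACGTACGTACGT'
individuals = {
    'ind1': [(3, 'sub', 'G')],                    # a single substitution (SNP)
    'ind2': [(5, 'ins', 'TT'), (9, 'del', 2)],    # a small insertion and deletion
}
for name, edits in individuals.items():
    print(name, apply_edits(base, edits))
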
Cite as

Veli Mäkinen, Gonzalo Navarro, Jouni Sirén, and Niko Välimäki. Storage and Retrieval of Individual Genomes. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{makinen_et_al:DagSemProc.08261.10,
  author =	{M\"{a}kinen, Veli and Navarro, Gonzalo and Sir\'{e}n, Jouni and V\"{a}lim\"{a}ki, Niko},
  title =	{{Storage and Retrieval of Individual Genomes}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--14},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.10},
  URN =		{urn:nbn:de:0030-drops-16743},
  doi =		{10.4230/DagSemProc.08261.10},
  annote =	{Keywords: Pattern matching, text indexing, compressed data structures, comparative genomics}
}
Document
SXSAQCT and XSAQCT: XML Queryable Compressors

Authors: Tomasz Müldner, Christopher Fry, Jan Krzysztof Miziolek, and Scott Durno


Abstract
Recently, there has been a growing interest in queryable XML compressors, which can be used to query compressed data with minimal decompression, or even without any decompression. At the same time, very few such projects have been made available for testing and comparison. In this paper, we report our current work on two novel queryable XML compressors: a schema-based compressor, SXSAQCT, and a schema-free compressor, XSAQCT. While the work on both compressors is in its early stage, our experiments (reported here) show that our approach can compete successfully with other known queryable compressors.

Cite as

Tomasz Müldner, Christopher Fry, Jan Krzysztof Miziolek, and Scott Durno. SXSAQCT and XSAQCT: XML Queryable Compressors. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{muldner_et_al:DagSemProc.08261.11,
  author =	{M\"{u}ldner, Tomasz and Fry, Christopher and Miziolek, Jan Krzysztof and Durno, Scott},
  title =	{{SXSAQCT and XSAQCT: XML Queryable Compressors}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--27},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.11},
  URN =		{urn:nbn:de:0030-drops-16738},
  doi =		{10.4230/DagSemProc.08261.11},
  annote =	{Keywords: XML compression, queryable}
}
Document
The XQueC Project: Compressing and Querying XML

Authors: Andrei Arion, Angela Bonifati, Ioana Manolescu, and Andrea Pugliese


Abstract
We outline in this paper the main contributions of the XQueC project. XQueC, short for XQuery processor and Compressor, is the first compression tool to seamlessly allow XQuery queries in the compressed domain. It includes a set of data structures that shred the XML document into suitable chunks linked to each other, thus departing from the 'homomorphic' principle adopted by previous XML compressors, according to which the compressed document is homomorphic to the original document. Moreover, in order to avoid the time spent compressing and decompressing intermediate query results, XQueC applies 'lazy' decompression, issuing queries directly in the compressed domain.

Cite as

Andrei Arion, Angela Bonifati, Ioana Manolescu, and Andrea Pugliese. The XQueC Project: Compressing and Querying XML. In Structure-Based Compression of Complex Massive Data. Dagstuhl Seminar Proceedings, Volume 8261, pp. 1-16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2008)


BibTeX

@InProceedings{arion_et_al:DagSemProc.08261.12,
  author =	{Arion, Andrei and Bonifati, Angela and Manolescu, Ioana and Pugliese, Andrea},
  title =	{{The XQueC Project: Compressing and Querying XML}},
  booktitle =	{Structure-Based Compression of Complex Massive Data},
  pages =	{1--16},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2008},
  volume =	{8261},
  editor =	{Stefan B\"{o}ttcher and Markus Lohrey and Sebastian Maneth and Wojciech Rytter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagSemProc.08261.12},
  URN =		{urn:nbn:de:0030-drops-16919},
  doi =		{10.4230/DagSemProc.08261.12},
  annote =	{Keywords: XML compression, Data structures, XQuery querying}
}
