Transactions on Graph Data and Knowledge, Volume 2, Issue 2


Special Issue on Resources for Graph Data and Knowledge

Editors

Aidan Hogan
  • DCC, Universidad de Chile, IMFD, Chile
Ian Horrocks
  • University of Oxford, U.K.
Andreas Hotho
  • University of Würzburg, Germany
Lalana Kagal
  • Massachusetts Institute of Technology, Cambridge, MA, USA
Uli Sattler
  • University of Manchester, U.K.

Publication Details

  • published at: 2024-12-18
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik

Documents

Document
Complete Issue
TGDK, Volume 2, Issue 2, Complete Issue

Abstract
TGDK, Volume 2, Issue 2, Complete Issue

Cite as

Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2: Special Issue on Resources for Graph Data and Knowledge, pp. 1-200, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{TGDK.2.2,
  title =	{{TGDK, Volume 2, Issue 2, Complete Issue}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1--200},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2},
  URN =		{urn:nbn:de:0030-drops-226267},
  doi =		{10.4230/TGDK.2.2},
  annote =	{Keywords: TGDK, Volume 2, Issue 2, Complete Issue}
}
Front Matter
Front Matter, Table of Contents, List of Authors

Abstract
Front Matter, Table of Contents, List of Authors

Cite as

Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2: Special Issue on Resources for Graph Data and Knowledge, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{TGDK.2.2.0,
  title =	{{Front Matter, Table of Contents, List of Authors}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{0:i--0:viii},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.0},
  URN =		{urn:nbn:de:0030-drops-226246},
  doi =		{10.4230/TGDK.2.2.0},
  annote =	{Keywords: Front Matter, Table of Contents, List of Authors}
}
Preface
Resources for Graph Data and Knowledge

Authors: Aidan Hogan, Ian Horrocks, Andreas Hotho, Lalana Kagal, and Uli Sattler


Abstract
In this Special Issue of Transactions on Graph Data and Knowledge - entitled "Resources for Graph Data and Knowledge" - we present eight articles that describe key resources in the area. These resources cover a wide range of topics within the scope of the journal, including graph querying, graph learning, information extraction, and ontologies, addressing applications of knowledge graphs involving art, bibliographical metadata, research reproducibility, and transport networks.

Cite as

Aidan Hogan, Ian Horrocks, Andreas Hotho, Lalana Kagal, and Uli Sattler. Resources for Graph Data and Knowledge. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{hogan_et_al:TGDK.2.2.1,
  author =	{Hogan, Aidan and Horrocks, Ian and Hotho, Andreas and Kagal, Lalana and Sattler, Uli},
  title =	{{Resources for Graph Data and Knowledge}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:2},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.1},
  URN =		{urn:nbn:de:0030-drops-225851},
  doi =		{10.4230/TGDK.2.2.1},
  annote =	{Keywords: Graphs, Data, Knowledge}
}
Resource Paper
NEOntometrics - A Public Endpoint for Calculating Ontology Metrics

Authors: Achim Reiz and Kurt Sandkuhl


Abstract
Ontologies are the cornerstone of the semantic web and knowledge graphs. They are available from various sources, come in many shapes and sizes, and differ widely in attributes like expressivity, degree of interconnection, or the number of individuals. As sharing knowledge and meaning across human and computational actors emphasizes the reuse of existing ontologies, how can we select the ontology that best fits an individual use case? How do we compare two ontologies or assess their different versions? Automatically calculated ontology metrics offer a starting point for an objective assessment. In recent years, a multitude of metrics have been proposed. However, metric implementations and validations on real-world data are scarce: for most of the proposed metrics, no software for their calculation is available (anymore). This work aims to close this implementation gap. We present the emerging resource NEOntometrics, an open-source, flexible metric endpoint that offers (1) an explorative help page that assists in understanding and selecting ontology metrics, (2) a public metric calculation service for assessing ontologies from online resources, including Git-based repositories for calculating evolutionary data, and (3) a scalable and adaptable architecture. In this paper, we first assess the state of the art, then present the software and its underlying architecture, followed by an evaluation. NEOntometrics is today the most extensive software for calculating ontology metrics.
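To illustrate what "automatically calculated ontology metrics" means in the simplest case, here is a small Python sketch that counts classes, object properties, and individuals in an ontology represented as a list of triples. This is an illustration of the general idea only, not the NEOntometrics API or its metric catalogue.

```python
# Illustrative sketch (not the NEOntometrics API): a few basic ontology
# metrics -- class, property, and individual counts -- computed over an
# ontology given as a list of (subject, predicate, object) triples.

OWL_CLASS = "owl:Class"
OWL_OBJECT_PROPERTY = "owl:ObjectProperty"
RDF_TYPE = "rdf:type"

def basic_metrics(triples):
    """Count classes, object properties, and individuals in a triple set."""
    classes = {s for s, p, o in triples if p == RDF_TYPE and o == OWL_CLASS}
    properties = {s for s, p, o in triples
                  if p == RDF_TYPE and o == OWL_OBJECT_PROPERTY}
    # Individuals: subjects typed with one of the declared classes.
    individuals = {s for s, p, o in triples if p == RDF_TYPE and o in classes}
    return {"classes": len(classes),
            "objectProperties": len(properties),
            "individuals": len(individuals)}

if __name__ == "__main__":
    ontology = [
        ("ex:Person", RDF_TYPE, OWL_CLASS),
        ("ex:City", RDF_TYPE, OWL_CLASS),
        ("ex:livesIn", RDF_TYPE, OWL_OBJECT_PROPERTY),
        ("ex:alice", RDF_TYPE, "ex:Person"),
        ("ex:oslo", RDF_TYPE, "ex:City"),
    ]
    print(basic_metrics(ontology))
```

Real metric suites go far beyond such counts (expressivity, connectivity, evolution over Git history), which is what makes a maintained public endpoint valuable.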

Cite as

Achim Reiz and Kurt Sandkuhl. NEOntometrics - A Public Endpoint for Calculating Ontology Metrics. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{reiz_et_al:TGDK.2.2.2,
  author =	{Reiz, Achim and Sandkuhl, Kurt},
  title =	{{NEOntometrics - A Public Endpoint for Calculating Ontology Metrics}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:22},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.2},
  URN =		{urn:nbn:de:0030-drops-225869},
  doi =		{10.4230/TGDK.2.2.2},
  annote =	{Keywords: Ontology Metrics, Ontology Quality, Knowledge Graph, Semantic Web, OWL, RDF}
}
Resource Paper
The dblp Knowledge Graph and SPARQL Endpoint

Authors: Marcel R. Ackermann, Hannah Bast, Benedikt Maria Beckermann, Johannes Kalmbach, Patrick Neises, and Stefan Ollinger


Abstract
For more than 30 years, the dblp computer science bibliography has provided quality-checked and curated bibliographic metadata on major computer science journals, proceedings, and monographs. Its semantic content has been published as RDF or similar graph data by third parties in the past, but most of these resources have now disappeared from the web or are no longer actively synchronized with the latest dblp data. In this article, we introduce the dblp Knowledge Graph (dblp KG), the first semantic representation of the dblp data that is designed and maintained by the dblp team. The dataset is augmented by citation data from the OpenCitations corpus. Open and FAIR access to the data is provided via daily updated RDF dumps, persistently archived monthly releases, a new public SPARQL endpoint with a powerful user interface, and a linked open data API. We also make it easy to self-host a replica of our SPARQL endpoint. We provide an introduction on how to work with the dblp KG and the added citation data using our SPARQL endpoint, with several example queries. Finally, we present the results of a small performance evaluation.
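As a flavour of the example queries the article walks through, the sketch below builds a SPARQL query string for the public dblp endpoint in Python. The endpoint URL and the schema terms (dblp:title, dblp:yearOfPublication) are assumptions based on the published dblp KG schema; verify them against the live documentation before relying on them.

```python
# Hedged sketch: constructing a SPARQL query for the dblp KG. The endpoint
# URL and property names below are assumptions, not guaranteed by this page.

ENDPOINT = "https://sparql.dblp.org/sparql"  # assumed public endpoint

def papers_by_year_query(year, limit=10):
    """Build a query for publication titles from a given year."""
    return f"""
PREFIX dblp: <https://dblp.org/rdf/schema#>
SELECT ?paper ?title WHERE {{
  ?paper dblp:title ?title ;
         dblp:yearOfPublication "{year}" .
}} LIMIT {limit}
""".strip()

if __name__ == "__main__":
    # The resulting string would be POSTed to ENDPOINT as a SPARQL request.
    print(papers_by_year_query(2024))
```

Because the endpoint can also be self-hosted from the RDF dumps, the same query works against a local replica.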

Cite as

Marcel R. Ackermann, Hannah Bast, Benedikt Maria Beckermann, Johannes Kalmbach, Patrick Neises, and Stefan Ollinger. The dblp Knowledge Graph and SPARQL Endpoint. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 3:1-3:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{ackermann_et_al:TGDK.2.2.3,
  author =	{Ackermann, Marcel R. and Bast, Hannah and Beckermann, Benedikt Maria and Kalmbach, Johannes and Neises, Patrick and Ollinger, Stefan},
  title =	{{The dblp Knowledge Graph and SPARQL Endpoint}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:23},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.3},
  URN =		{urn:nbn:de:0030-drops-225870},
  doi =		{10.4230/TGDK.2.2.3},
  annote =	{Keywords: dblp, Scholarly Knowledge Graph, Resource, RDF, SPARQL}
}
Resource Paper
FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset

Authors: Sheeba Samuel and Daniel Mietchen


Abstract
The way in which data are shared can affect their utility and reusability. Here, we demonstrate how data that we had previously shared in bulk can be mobilized further through a knowledge graph that allows for much more granular exploration and interrogation. The original dataset is about the computational reproducibility of GitHub-hosted Jupyter notebooks associated with biomedical publications. It contains rich metadata about the publications, associated GitHub repositories and Jupyter notebooks, and the notebooks' reproducibility. We took this dataset, converted it into semantic triples and loaded these into a triple store to create a knowledge graph - FAIR Jupyter - that we made accessible via a web service. This enables granular data exploration and analysis through queries that can be tailored to specific use cases. Such queries may provide details about any of the variables from the original dataset, highlight relationships between them, or combine some of the graph's content with materials from corresponding external resources. We provide a collection of example queries addressing a range of use cases in research and education. We also outline how sets of such queries can be used to profile specific content types, either individually or by class. We conclude by discussing how such a semantically enhanced sharing of complex datasets can both enhance their FAIRness - i.e., their findability, accessibility, interoperability, and reusability - and help identify and communicate best practices, particularly with regard to data quality, standardization, automation and reproducibility.
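The core step described above, converting bulk tabular records into semantic triples, can be sketched minimally as follows. The field names and URI scheme here are hypothetical; the actual FAIR Jupyter graph uses its own vocabulary.

```python
# Hypothetical sketch of turning one tabular record about a notebook's
# reproducibility into (subject, predicate, object) triples, the kind of
# conversion described in the abstract. Names are illustrative only.

def row_to_triples(row, base="https://example.org/"):
    """Map one record (id, repository, status) to a list of triples."""
    s = base + "notebook/" + row["id"]
    return [
        (s, base + "inRepository", base + "repo/" + row["repository"]),
        (s, base + "reproducibilityStatus", row["status"]),
    ]

if __name__ == "__main__":
    print(row_to_triples({"id": "nb1", "repository": "r1",
                          "status": "reproduced"}))
```

Once loaded into a triple store, each former table column becomes individually queryable, which is what enables the granular exploration the article emphasizes.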

Cite as

Sheeba Samuel and Daniel Mietchen. FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{samuel_et_al:TGDK.2.2.4,
  author =	{Samuel, Sheeba and Mietchen, Daniel},
  title =	{{FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:24},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.4},
  URN =		{urn:nbn:de:0030-drops-225886},
  doi =		{10.4230/TGDK.2.2.4},
  annote =	{Keywords: Knowledge Graph, Computational reproducibility, Jupyter notebooks, FAIR data, PubMed Central, GitHub, Python, SPARQL}
}
Resource Paper
The Reasonable Ontology Templates Framework

Authors: Martin Georg Skjæveland and Leif Harald Karlsen


Abstract
Reasonable Ontology Templates (OTTR) is a templating language for representing and instantiating patterns. It is based on simple and generic, but powerful, mechanisms such as recursive macro expansion, term substitution and type systems, and is designed particularly for building and maintaining RDF knowledge graphs and OWL ontologies. In this resource paper, we present the formal specifications that define the OTTR framework. This includes the fundamentals of the OTTR language and the adaptions to make it fit with standard semantic web languages, and two serialization formats developed for semantic web practitioners. We also present the OTTR framework’s support for documenting, publishing and managing template libraries, and for tools for practical bulk instantiation of templates from tabular data and queryable data sources. The functionality of the OTTR framework is available for use through Lutra, an open-source reference implementation, and other independent implementations. We report on the use and impact of OTTR by presenting selected industrial use cases. Finally, we reflect on some design considerations of the language and framework and present ideas for future work.
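The term-substitution mechanism at the heart of the framework can be sketched in a few lines: a template is a set of triple patterns over parameters, and instantiation substitutes arguments for parameters. Real OTTR adds recursive expansion, type checking, and list arguments; the template below is invented for illustration.

```python
# Minimal sketch of OTTR-style template instantiation: substitute arguments
# for parameters in each triple pattern. Not the OTTR serialization formats.

def expand(template, args):
    """Replace parameter terms with their arguments in every pattern."""
    def subst(term):
        return args.get(term, term)  # non-parameters pass through unchanged
    return [tuple(subst(t) for t in pattern) for pattern in template]

# A hypothetical two-parameter template declaring a labelled class.
NAMED_CLASS = [
    ("?cls", "rdf:type", "owl:Class"),
    ("?cls", "rdfs:label", "?label"),
]

if __name__ == "__main__":
    print(expand(NAMED_CLASS, {"?cls": "ex:Margherita",
                               "?label": "Margherita"}))
```

Bulk instantiation, as supported by Lutra, amounts to running such an expansion once per row of a spreadsheet or query result.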

Cite as

Martin Georg Skjæveland and Leif Harald Karlsen. The Reasonable Ontology Templates Framework. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 5:1-5:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{skjaeveland_et_al:TGDK.2.2.5,
  author =	{Skj{\ae}veland, Martin Georg and Karlsen, Leif Harald},
  title =	{{The Reasonable Ontology Templates Framework}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:54},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.5},
  URN =		{urn:nbn:de:0030-drops-225896},
  doi =		{10.4230/TGDK.2.2.5},
  annote =	{Keywords: Ontology engineering, Ontology design patterns, Template mechanism, Macros}
}
Resource Paper
TØIRoads: A Road Data Model Generation Tool

Authors: Grunde Haraldsson Wesenberg and Ana Ozaki


Abstract
We describe road data models that can represent high-level features of a road network, such as population, points of interest, and road length/cost and capacity, while abstracting from time and geographic location. Such abstraction allows for a simplified traffic usage and congestion analysis that focuses on the high-level features. We provide theoretical results regarding mass conservation and sufficient conditions for avoiding congestion within the model. We describe a road data model generation tool, which we call "TØIRoads". We also describe several parameters that can be specified by a TØIRoads user to create graph data that can serve as input for training graph neural networks (or any other learning approach that takes graph data as input) for predicting congestion within the model. The road data model generation tool allows, for instance, studying the effects of population growth and how changes in road capacity can mitigate traffic congestion.
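The kind of parameterized, geography-free graph data such a generator emits can be sketched as below: nodes carry population, edges carry capacity and cost, and a seed makes datasets reproducible. The parameter names are illustrative, not TØIRoads' actual configuration.

```python
# Hypothetical sketch of a road-model generator's output format: an abstract
# graph with population on nodes and capacity/cost on edges, suitable as
# input for a graph neural network. Parameters are invented for illustration.
import random

def generate_road_graph(n_nodes, n_edges, max_population=1000,
                        max_capacity=100, seed=0):
    rng = random.Random(seed)  # seeded so generated datasets are reproducible
    nodes = {i: {"population": rng.randint(0, max_population)}
             for i in range(n_nodes)}
    edges = []
    while len(edges) < n_edges:
        u, v = rng.sample(range(n_nodes), 2)  # distinct endpoints, no loops
        edges.append((u, v, {"capacity": rng.randint(1, max_capacity),
                             "cost": rng.uniform(1.0, 10.0)}))
    return nodes, edges

if __name__ == "__main__":
    nodes, edges = generate_road_graph(5, 6)
    print(len(nodes), len(edges))
```

Varying `max_population` or edge capacities across generated datasets is how one would study, e.g., the congestion effects of population growth mentioned above.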

Cite as

Grunde Haraldsson Wesenberg and Ana Ozaki. TØIRoads: A Road Data Model Generation Tool. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{wesenberg_et_al:TGDK.2.2.6,
  author =	{Wesenberg, Grunde Haraldsson and Ozaki, Ana},
  title =	{{T{\O}IRoads: A Road Data Model Generation Tool}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{6:1--6:12},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.6},
  URN =		{urn:nbn:de:0030-drops-225901},
  doi =		{10.4230/TGDK.2.2.6},
  annote =	{Keywords: Road Data, Transportation, Graph Neural Networks, Synthetic Dataset Generation}
}
Resource Paper
Whelk: An OWL EL+RL Reasoner Enabling New Use Cases

Authors: James P. Balhoff and Christopher J. Mungall


Abstract
Many tasks in the biosciences rely on reasoning with large OWL terminologies (Tboxes), often combined with even larger databases. In particular, a common task is retrieval queries that utilize relational expressions; for example, "find all genes expressed in the brain or any part of the brain". Automated reasoning on these ontologies typically relies on scalable reasoners targeting the EL subset of OWL, such as ELK. While the introduction of ELK has been transformative in the incorporation of reasoning into bio-ontology quality control and production pipelines, we have encountered limitations when applying it to use cases involving high-throughput query answering or reasoning about datasets describing instances (Aboxes). Whelk is a fast OWL reasoner for combined EL+RL reasoning. As such, it is particularly useful for biological ontology tasks, especially those characterized by large Tboxes using the EL subset of OWL combined with Aboxes targeting the RL subset of OWL. Whelk is implemented in Scala and utilizes immutable functional data structures, which provides advantages when performing incremental or dynamic reasoning tasks. Whelk supports querying complex class expressions at a substantially greater rate than ELK, and can answer queries or perform incremental reasoning tasks in parallel, enabling novel applications of OWL reasoning.
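One small ingredient of the subsumption reasoning described above can be illustrated in isolation: computing the transitive closure of told subclass axioms, so that a query like "all subclasses of anatomical entity" returns indirect subclasses too. Whelk implements the full EL+RL calculus; this toy sketch covers only atomic subsumption.

```python
# Toy illustration (not Whelk's algorithm): transitively close a set of
# (subclass, superclass) axioms by repeatedly applying
#   A ⊑ B and B ⊑ C  implies  A ⊑ C
# until a fixpoint is reached.

def subsumption_closure(axioms):
    """axioms: set of (sub, sup) pairs; returns the transitive closure."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

if __name__ == "__main__":
    told = {("Neuron", "Cell"), ("Cell", "AnatomicalEntity")}
    print(sorted(subsumption_closure(told)))
```

Scaling this idea to existential restrictions ("part of some brain") and Abox instances, incrementally and in parallel, is where a dedicated reasoner earns its keep.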

Cite as

James P. Balhoff and Christopher J. Mungall. Whelk: An OWL EL+RL Reasoner Enabling New Use Cases. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{balhoff_et_al:TGDK.2.2.7,
  author =	{Balhoff, James P. and Mungall, Christopher J.},
  title =	{{Whelk: An OWL EL+RL Reasoner Enabling New Use Cases}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{7:1--7:17},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.7},
  URN =		{urn:nbn:de:0030-drops-225918},
  doi =		{10.4230/TGDK.2.2.7},
  annote =	{Keywords: Web Ontology Language, OWL, Semantic Web, ontology, reasoner}
}
Resource Paper
MELArt: A Multimodal Entity Linking Dataset for Art

Authors: Alejandro Sierra-Múnera, Linh Le, Gianluca Demartini, and Ralf Krestel


Abstract
Traditional named entity linking (NEL) tools have largely employed a general-domain approach, spanning various entity types such as persons, organizations, locations, and events in a multitude of contexts. While multimodal entity linking datasets exist (e.g., disambiguation of person names with the help of photographs), there is a need to develop domain-specific resources that represent the unique challenges present in domains like cultural heritage (e.g., stylistic changes through time, diversity of social and political context). To address this gap, our work presents a novel multimodal entity linking benchmark dataset for the art domain, together with a comprehensive experimental evaluation of existing NEL methods on this new dataset. The dataset encapsulates various entities unique to the art domain. During the dataset creation process, we also employ manual human evaluation, providing high-quality labels for our dataset. We introduce an automated process that facilitates the generation of this art dataset, harnessing data from multiple sources (Artpedia, Wikidata and Wikimedia Commons) to ensure its reliability and comprehensiveness. Furthermore, our paper delineates best practices for the integration of art datasets, and presents a detailed performance analysis of general-domain entity linking systems when applied to domain-specific datasets. Through our research, we aim to address the lack of datasets for NEL in the art domain, providing resources for the development of new, more nuanced, and contextually rich entity linking methods in the realm of art and cultural heritage.
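To make the entity linking task concrete, here is a deliberately naive baseline for its first step, candidate generation: ranking knowledge-base entities by token overlap with a textual mention. Real systems, including the baselines evaluated on MELArt, use much richer textual and visual features; this sketch only fixes the shape of the problem.

```python
# Naive candidate generation for entity linking (illustration only):
# rank entities whose label shares tokens with the mention.

def candidates(mention, entities):
    """entities: {entity_id: label}; returns ids ranked by token overlap."""
    m_tokens = set(mention.lower().split())
    scored = []
    for ent_id, label in entities.items():
        overlap = len(m_tokens & set(label.lower().split()))
        if overlap:
            scored.append((overlap, ent_id))
    return [e for _, e in sorted(scored, reverse=True)]

if __name__ == "__main__":
    kb = {"Q45585": "The Starry Night", "Q12418": "Mona Lisa"}
    print(candidates("starry night", kb))
```

The hard part, which the multimodal dataset targets, is disambiguating among plausible candidates using both the surrounding text and the artwork image.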

Cite as

Alejandro Sierra-Múnera, Linh Le, Gianluca Demartini, and Ralf Krestel. MELArt: A Multimodal Entity Linking Dataset for Art. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 8:1-8:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{sierramunera_et_al:TGDK.2.2.8,
  author =	{Sierra-M\'{u}nera, Alejandro and Le, Linh and Demartini, Gianluca and Krestel, Ralf},
  title =	{{MELArt: A Multimodal Entity Linking Dataset for Art}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{8:1--8:22},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.8},
  URN =		{urn:nbn:de:0030-drops-225921},
  doi =		{10.4230/TGDK.2.2.8},
  annote =	{Keywords: A Multimodal Entity Linking Dataset, Named Entity Linking, Art Domain, Wikidata, Wikimedia, Artpedia}
}
Resource Paper
Horned-OWL: Flying Further and Faster with Ontologies

Authors: Phillip Lord, Björn Gehrke, Martin Larralde, Janna Hastings, Filippo De Bortoli, James A. Overton, James P. Balhoff, and Jennifer Warrender


Abstract
Horned-OWL is a library implementing the OWL2 specification in the Rust language. As a library, it is aimed at the processing and manipulation of ontologies rather than at supporting GUI development; this is reflected heavily in its design, which prioritizes performance and pluggability. It builds on the Rust idiom, treating an ontology as a standard Rust collection, meaning it can take direct advantage of the data manipulation capabilities of the Rust standard library. The core library consists of a data model implementation as well as an IO framework supporting many common formats for OWL: RDF, XML, and the OWL functional syntax; an extensive test library ensures compliance with the specification. In addition to the core library, Horned-OWL now supports a growing ecosystem: the py-horned-owl library provides a Python front-end for Horned-OWL, ideal for scripting ontology manipulation; whelk-rs provides reasoning services; and horned-bin provides a number of command-line tools. The library itself is now mature, supporting the entire OWL2 specification in addition to SWRL rules, and the ecosystem is emerging as one of the most extensive for the manipulation of OWL ontologies.
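The "ontology as a standard collection" design idea can be conveyed with a language-neutral analogy: once axioms live in an ordinary collection, ordinary collection operations (filtering, mapping, comprehensions) become ontology-manipulation operations. The Python below is only that analogy; the axiom shapes are invented, and Horned-OWL's actual Rust data model is considerably richer.

```python
# Analogy only (not Horned-OWL's API): an ontology as a plain set of axiom
# tuples, manipulated with standard collection idioms.

ontology = {
    ("SubClassOf", "ex:Neuron", "ex:Cell"),
    ("SubClassOf", "ex:Cell", "ex:AnatomicalEntity"),
    ("AnnotationAssertion", "ex:Neuron", "rdfs:label", "neuron"),
}

# A query is a filter ...
subclass_axioms = {a for a in ontology if a[0] == "SubClassOf"}

# ... and a refactoring (here, a namespace rename) is a map.
renamed = {tuple(t.replace("ex:", "obo:") for t in a) for a in ontology}

if __name__ == "__main__":
    print(len(subclass_axioms), len(renamed))
```

In Rust, the same idioms come with iterator fusion and zero-cost abstractions, which is part of the performance argument made above.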

Cite as

Phillip Lord, Björn Gehrke, Martin Larralde, Janna Hastings, Filippo De Bortoli, James A. Overton, James P. Balhoff, and Jennifer Warrender. Horned-OWL: Flying Further and Faster with Ontologies. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{lord_et_al:TGDK.2.2.9,
  author =	{Lord, Phillip and Gehrke, Bj\"{o}rn and Larralde, Martin and Hastings, Janna and De Bortoli, Filippo and Overton, James A. and Balhoff, James P. and Warrender, Jennifer},
  title =	{{Horned-OWL: Flying Further and Faster with Ontologies}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{9:1--9:14},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.9},
  URN =		{urn:nbn:de:0030-drops-225932},
  doi =		{10.4230/TGDK.2.2.9},
  annote =	{Keywords: Web Ontology Language, OWL, Semantic Web}
}
