33 Search Results for "TGDK"

Document
Research
CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making

Authors: Veronica dos Santos, Daniel Schwabe, Altigran Soares da Silva, and Sérgio Lifschitz

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
In decision-making scenarios, an information need arises due to a knowledge gap when a decision-maker needs more knowledge to make a decision. Users may take the initiative to acquire knowledge to fill this gap through exploratory search approaches using Knowledge Graphs (KGs) as information sources, but their queries can be incomplete, inaccurate, and ambiguous. Although KGs have great potential for exploratory search, they are incomplete by nature. Moreover, both crowd-sourced KGs and KGs constructed by integrating several information sources of varying quality require a Trust Layer to be effectively consumed. Our research aims to enrich KGs and support querying them for context-aware exploration in decision-making scenarios. We propose a layered architecture for Context Augmented Knowledge Graph-based Decision Support Systems with a Knowledge Layer that operates under a Dual Open World Assumption (DOWA). Under DOWA, the evaluation of the truthfulness of information obtained from KGs depends on the context of its claims and the tasks carried out or intended (purpose). The Knowledge Layer comprises a Context Augmented KG (CoaKG) and a CoaKG Query Engine. The CoaKG contains contextual mappings to identify explicit context and rules to infer implicit context. The CoaKG Query Engine is designed as a query-answering approach that retrieves all contextualized answers from the CoaKG. A Proof of Concept (PoC) based on Wikidata was developed to evaluate the effectiveness of the Knowledge Layer.
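The notion of contextualized answers can be sketched in a few lines (a toy data model invented for illustration, not the paper's actual CoaKG implementation): each claim carries a context qualifier, and a query returns every answer together with the context in which it holds, rather than a single context-free "truth".

```python
# Toy context-augmented claim store: each triple carries a context
# qualifier (e.g., point of view or temporal scope). Data is invented.
claims = [
    ("Crimea", "partOf", "Ukraine", {"accordingTo": "UN"}),
    ("Crimea", "partOf", "Russia", {"accordingTo": "RU gov"}),
    ("Pluto", "instanceOf", "planet", {"validUntil": 2006}),
]

def contextualized_answers(subject, predicate):
    """Return all (object, context) pairs matching a subject/predicate query."""
    return [(o, ctx) for s, p, o, ctx in claims
            if s == subject and p == predicate]

answers = contextualized_answers("Crimea", "partOf")
# Both answers are returned, each qualified by the context of its claim.
```

Which answer is "true" is then left to the consuming task, matching the DOWA idea that truthfulness depends on context and purpose.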

Cite as

Veronica dos Santos, Daniel Schwabe, Altigran Soares da Silva, and Sérgio Lifschitz. CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 4:1-4:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{dossantos_et_al:TGDK.3.1.4,
  author =	{dos Santos, Veronica and Schwabe, Daniel and da Silva, Altigran Soares and Lifschitz, S\'{e}rgio},
  title =	{{CoaKG: A Contextualized Knowledge Graph Approach for Exploratory Search and Decision Making}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:27},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.4},
  URN =		{urn:nbn:de:0030-drops-236685},
  doi =		{10.4230/TGDK.3.1.4},
  annote =	{Keywords: Knowledge Graphs, Context Search, Decision Support}
}
Document
Survey
Uncertainty Management in the Construction of Knowledge Graphs: A Survey

Authors: Lucas Jarnac, Yoan Chabot, and Miguel Couceiro

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
Knowledge Graphs (KGs) are a major asset for companies thanks to their great flexibility in data representation and their numerous applications, e.g., vocabulary sharing, Q&A or recommendation systems. To build a KG, it is a common practice to rely on automatic methods for extracting knowledge from various heterogeneous sources. However, in a noisy and uncertain world, knowledge may not be reliable and conflicts between data sources may occur. Integrating unreliable data would directly impact the use of the KG; therefore, such conflicts must be resolved. This could be done manually by selecting the best data to integrate. This first approach is highly accurate, but costly and time-consuming. That is why recent efforts focus on automatic approaches; this is a challenging task, since it requires handling the uncertainty of extracted knowledge throughout its integration into the KG. We survey state-of-the-art approaches in this direction and present constructions of both open and enterprise KGs. We then describe different knowledge extraction methods and discuss downstream tasks after knowledge acquisition, including KG completion using embedding models, knowledge alignment, and knowledge fusion, in order to address the problem of knowledge uncertainty in KG construction. We conclude with a discussion of the remaining challenges and perspectives when constructing a KG under uncertainty.
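One of the simplest automatic conflict-resolution strategies in this space is reliability-weighted voting: each source gets a trust weight, and conflicting values for the same fact are scored by the total weight of the sources asserting them. The sketch below uses invented sources and weights; it is one baseline among the many approaches such surveys cover.

```python
# Weighted voting over conflicting extractions: the value backed by the
# most total source reliability wins. Sources and weights are invented.
from collections import defaultdict

source_weight = {"wiki_dump": 0.9, "web_table": 0.5, "ocr_scan": 0.2}

observations = [  # (source, entity, attribute, value)
    ("wiki_dump", "Paris", "country", "France"),
    ("web_table", "Paris", "country", "France"),
    ("ocr_scan",  "Paris", "country", "Frange"),  # noisy OCR extraction
]

def resolve(entity, attribute):
    """Pick the value with the highest summed source weight."""
    scores = defaultdict(float)
    for src, e, a, v in observations:
        if e == entity and a == attribute:
            scores[v] += source_weight[src]
    return max(scores, key=scores.get)

best = resolve("Paris", "country")
```

Truth-discovery methods refine this by estimating the weights themselves from the data instead of fixing them up front.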

Cite as

Lucas Jarnac, Yoan Chabot, and Miguel Couceiro. Uncertainty Management in the Construction of Knowledge Graphs: A Survey. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 3:1-3:48, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{jarnac_et_al:TGDK.3.1.3,
  author =	{Jarnac, Lucas and Chabot, Yoan and Couceiro, Miguel},
  title =	{{Uncertainty Management in the Construction of Knowledge Graphs: A Survey}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:48},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.3},
  URN =		{urn:nbn:de:0030-drops-233733},
  doi =		{10.4230/TGDK.3.1.3},
  annote =	{Keywords: Knowledge reconciliation, Uncertainty, Heterogeneous sources, Knowledge graph construction}
}
Document
Resource
KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems

Authors: Nora Abdelmageed, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, and Birgitta König-Ries

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
Tabular data, often in the form of CSV files, plays a pivotal role in data analytics pipelines. Understanding this data semantically, known as Semantic Table Interpretation (STI), is crucial but poses challenges due to several factors such as the ambiguity of labels. As a result, STI has gained increasing attention from the community in the past few years. Evaluating STI systems requires well-established benchmarks. Most of the existing large-scale benchmarks are derived from general domain sources and focus on ambiguity, while domain-specific benchmarks are relatively small in size. This paper introduces KG2Tables, a framework that can construct domain-specific large-scale benchmarks from a Knowledge Graph (KG). KG2Tables leverages the internal hierarchy of the relevant KG concepts and their properties. As a proof of concept, we have built large datasets in the food, biodiversity, and biomedical domains. The resulting datasets, tFood, tBiomed, and tBiodiv, have been made available to the public in the ISWC SemTab challenge (2023 and 2024 editions). We include the evaluation results of top-performing STI systems using tFood. Such results underscore its potential as a robust evaluation benchmark for challenging STI systems. We demonstrate the quality of the generated benchmarks using a sample-based approach, including, for example, an assessment of how realistic the tables are. Finally, we provide an extensive discussion of KG2Tables, explaining how it could be used to create benchmarks for any other domain of interest, and describing its key features and limitations together with suggestions for overcoming them.
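The core idea of generating tables from a KG's concept hierarchy can be illustrated with a toy walk over an invented hierarchy (the classes, instances, and properties below are made up; the real framework draws them from a KG such as Wikidata):

```python
# Miniature of the KG2Tables idea: walk a concept hierarchy and emit
# benchmark-style rows (label, type, property value). Data is invented.
hierarchy = {"Food": ["Fruit", "Cheese"]}            # class -> subclasses
instances = {"Fruit": ["apple", "pear"], "Cheese": ["brie"]}
properties = {"apple": {"color": "red"}, "pear": {"color": "green"},
              "brie": {"color": "white"}}

def build_table(root):
    """Emit one table for a root concept, one row per instance."""
    rows = [("label", "type", "color")]              # header row
    for subclass in hierarchy.get(root, []):
        for entity in instances.get(subclass, []):
            rows.append((entity, subclass, properties[entity]["color"]))
    return rows

table = build_table("Food")  # could be written out as a CSV benchmark file
```

An STI system's job would then be the inverse: given only the rows, recover the entity, type, and property annotations against the KG.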

Cite as

Nora Abdelmageed, Ernesto Jiménez-Ruiz, Oktie Hassanzadeh, and Birgitta König-Ries. KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 1:1-1:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{abdelmageed_et_al:TGDK.3.1.1,
  author =	{Abdelmageed, Nora and Jim\'{e}nez-Ruiz, Ernesto and Hassanzadeh, Oktie and K\"{o}nig-Ries, Birgitta},
  title =	{{KG2Tables: A Domain-Specific Tabular Data Generator to Evaluate Semantic Table Interpretation Systems}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:28},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.1},
  URN =		{urn:nbn:de:0030-drops-230104},
  doi =		{10.4230/TGDK.3.1.1},
  annote =	{Keywords: Semantic Table Interpretation (STI), Knowledge Graph (KG), STI Benchmark, Food, Biodiversity, Biomedical}
}
Document
Research
Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs

Authors: Elisavet Koutsiana, Ioannis Reklos, Kholoud Saad Alghamdi, Nitisha Jain, Albert Meroño-Peñuela, and Elena Simperl

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1 (2025)


Abstract
We study collaboration patterns in Wikidata, one of the world's largest open-source collaborative knowledge graph (KG) communities. Collaborative KG communities play a key role in structuring machine-readable knowledge to support AI systems like conversational agents. However, these communities face challenges related to long-term member engagement, as a small subset of contributors is often responsible for the majority of contributions and decision-making. While prior research has explored contributors' roles and lifespans, discussions within collaborative KG communities remain understudied. To fill this gap, we investigated the behavioural patterns of contributors and the factors affecting their communication and participation. We analysed all the discussions on Wikidata using a mixed-methods approach, including statistical tests, network analysis, and text and graph embedding representations. Our findings reveal that the interactions between Wikidata editors form a small-world network that is resilient to dropouts and inclusive, and in which both the network topology and the discussion content influence the continuity of conversations. Furthermore, the account age of Wikidata members and their conversations are significant factors in their long-term engagement with the project. Our observations and recommendations can benefit the Wikidata and semantic web communities, providing guidance on how to improve collaborative environments for sustainability, growth, and quality.
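One ingredient of a small-world diagnosis is high local clustering. As a rough illustration (with an invented, tiny "who replied to whom" graph, not the paper's data), the local clustering coefficient of an editor can be computed with the standard library alone:

```python
# Local clustering coefficient of a node in a toy undirected reply graph:
# the fraction of the node's neighbour pairs that are themselves linked.
talk_graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

def clustering(node):
    nbrs = talk_graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count linked neighbour pairs (each unordered pair once).
    links = sum(1 for u in nbrs for v in nbrs
                if u < v and v in talk_graph[u])
    return 2 * links / (k * (k - 1))

c_a = clustering("A")  # only the B-C pair of A's neighbours is linked
```

Small-world analyses combine such clustering values with average shortest-path lengths across the whole network.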

Cite as

Elisavet Koutsiana, Ioannis Reklos, Kholoud Saad Alghamdi, Nitisha Jain, Albert Meroño-Peñuela, and Elena Simperl. Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 1, pp. 2:1-2:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{koutsiana_et_al:TGDK.3.1.2,
  author =	{Koutsiana, Elisavet and Reklos, Ioannis and Alghamdi, Kholoud Saad and Jain, Nitisha and Mero\~{n}o-Pe\~{n}uela, Albert and Simperl, Elena},
  title =	{{Talking Wikidata: Communication Patterns and Their Impact on Community Engagement in Collaborative Knowledge Graphs}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:27},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.1.2},
  URN =		{urn:nbn:de:0030-drops-230114},
  doi =		{10.4230/TGDK.3.1.2},
  annote =	{Keywords: collaborative knowledge graph, network analysis, graph embeddings, text embeddings}
}
Document
Resource Paper
NEOntometrics - A Public Endpoint for Calculating Ontology Metrics

Authors: Achim Reiz and Kurt Sandkuhl

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
Ontologies are the cornerstone of the semantic web and knowledge graphs. They are available from various sources, come in many shapes and sizes, and differ widely in attributes like expressivity, degree of interconnection, or the number of individuals. As sharing knowledge and meaning across human and computational actors emphasizes the reuse of existing ontologies, how can we select the ontology that best fits the individual use case? How do we compare two ontologies or assess their different versions? Automatically calculated ontology metrics offer a starting point for an objective assessment. In the past years, a multitude of metrics have been proposed. However, metric implementations and validations for real-world data are scarce. For most of these proposed metrics, no software for their calculation is available (anymore). This work aims at solving this implementation gap. We present the emerging resource NEOntometrics, an open-source, flexible metric endpoint that offers (1.) an explorative help page that assists in understanding and selecting ontology metrics, (2.) a public metric calculation service that allows assessing ontologies from online resources, including GIT-based repositories for calculating evolutional data, with (3.) a scalable and adaptable architecture. In this paper, we first evaluate the state of the art, then show the software and its underlying architecture, followed by an evaluation. NEOntometrics is today the most extensive software for calculating ontology metrics.
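In their simplest form, "automatically calculated ontology metrics" are counts over an ontology's axioms. The sketch below computes three illustrative metrics over a toy triple list; the metric names are invented for illustration and do not reflect NEOntometrics' actual catalogue:

```python
# Three toy ontology metrics computed from a list of triples.
RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

triples = [
    ("Dog", SUBCLASS, "Animal"),
    ("Cat", SUBCLASS, "Animal"),
    ("rex", RDF_TYPE, "Dog"),
]

def metrics(ts):
    """Count classes, subclass axioms, and individuals (naively)."""
    classes = ({s for s, p, o in ts if p == SUBCLASS}
               | {o for s, p, o in ts if p == SUBCLASS})
    individuals = {s for s, p, o in ts if p == RDF_TYPE}
    return {
        "classes": len(classes),
        "subclass_axioms": sum(1 for _, p, _ in ts if p == SUBCLASS),
        "individuals": len(individuals),
    }

m = metrics(triples)
```

Comparing such metric vectors across ontologies (or versions of one ontology) is what enables the objective assessment the abstract describes.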

Cite as

Achim Reiz and Kurt Sandkuhl. NEOntometrics - A Public Endpoint for Calculating Ontology Metrics. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 2:1-2:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{reiz_et_al:TGDK.2.2.2,
  author =	{Reiz, Achim and Sandkuhl, Kurt},
  title =	{{NEOntometrics - A Public Endpoint for Calculating Ontology Metrics}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:22},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.2},
  URN =		{urn:nbn:de:0030-drops-225869},
  doi =		{10.4230/TGDK.2.2.2},
  annote =	{Keywords: Ontology Metrics, Ontology Quality, Knowledge Graph, Semantic Web, OWL, RDF}
}
Document
Preface
Resources for Graph Data and Knowledge

Authors: Aidan Hogan, Ian Horrocks, Andreas Hotho, Lalana Kagal, and Uli Sattler

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
In this Special Issue of Transactions on Graph Data and Knowledge - entitled "Resources for Graph Data and Knowledge" - we present eight articles that describe key resources in the area. These resources cover a wide range of topics within the scope of the journal, including graph querying, graph learning, information extraction, and ontologies, addressing applications of knowledge graphs involving art, bibliographical metadata, research reproducibility, and transport networks.

Cite as

Aidan Hogan, Ian Horrocks, Andreas Hotho, Lalana Kagal, and Uli Sattler. Resources for Graph Data and Knowledge. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 1:1-1:2, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{hogan_et_al:TGDK.2.2.1,
  author =	{Hogan, Aidan and Horrocks, Ian and Hotho, Andreas and Kagal, Lalana and Sattler, Uli},
  title =	{{Resources for Graph Data and Knowledge}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:2},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.1},
  URN =		{urn:nbn:de:0030-drops-225851},
  doi =		{10.4230/TGDK.2.2.1},
  annote =	{Keywords: Graphs, Data, Knowledge}
}
Document
Resource Paper
FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset

Authors: Sheeba Samuel and Daniel Mietchen

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
The way in which data are shared can affect their utility and reusability. Here, we demonstrate how data that we had previously shared in bulk can be mobilized further through a knowledge graph that allows for much more granular exploration and interrogation. The original dataset is about the computational reproducibility of GitHub-hosted Jupyter notebooks associated with biomedical publications. It contains rich metadata about the publications, associated GitHub repositories and Jupyter notebooks, and the notebooks' reproducibility. We took this dataset, converted it into semantic triples and loaded these into a triple store to create a knowledge graph - FAIR Jupyter - that we made accessible via a web service. This enables granular data exploration and analysis through queries that can be tailored to specific use cases. Such queries may provide details about any of the variables from the original dataset, highlight relationships between them or combine some of the graph’s content with materials from corresponding external resources. We provide a collection of example queries addressing a range of use cases in research and education. We also outline how sets of such queries can be used to profile specific content types, either individually or by class. We conclude by discussing how such a semantically enhanced sharing of complex datasets can both enhance their FAIRness - i.e., their findability, accessibility, interoperability, and reusability - and help identify and communicate best practices, particularly with regard to data quality, standardization, automation and reproducibility.
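A tailored query of the kind described might look as follows. This is a hedged sketch only: the prefix IRI and property names (`fj:inRepository`, `fj:reproducibilityStatus`) are invented for illustration and will differ from the actual FAIR Jupyter vocabulary.

```python
# Compose a SPARQL SELECT over a hypothetical notebook-reproducibility
# vocabulary; the resulting string could be sent to a SPARQL endpoint.
def reproducible_notebooks_query(limit=10):
    return f"""
PREFIX fj: <https://example.org/fairjupyter/>
SELECT ?notebook ?repo WHERE {{
  ?notebook fj:inRepository ?repo ;
            fj:reproducibilityStatus "success" .
}} LIMIT {limit}
""".strip()

q = reproducible_notebooks_query(5)
```

The point of the knowledge-graph packaging is exactly this: each such question becomes one small query instead of a bespoke pass over the bulk dataset.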

Cite as

Sheeba Samuel and Daniel Mietchen. FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{samuel_et_al:TGDK.2.2.4,
  author =	{Samuel, Sheeba and Mietchen, Daniel},
  title =	{{FAIR Jupyter: A Knowledge Graph Approach to Semantic Sharing and Granular Exploration of a Computational Notebook Reproducibility Dataset}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:24},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.4},
  URN =		{urn:nbn:de:0030-drops-225886},
  doi =		{10.4230/TGDK.2.2.4},
  annote =	{Keywords: Knowledge Graph, Computational reproducibility, Jupyter notebooks, FAIR data, PubMed Central, GitHub, Python, SPARQL}
}
Document
Resource Paper
Whelk: An OWL EL+RL Reasoner Enabling New Use Cases

Authors: James P. Balhoff and Christopher J. Mungall

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
Many tasks in the biosciences rely on reasoning with large OWL terminologies (Tboxes), often combined with even larger databases. In particular, a common task is retrieval queries that utilize relational expressions; for example, “find all genes expressed in the brain or any part of the brain”. Automated reasoning on these ontologies typically relies on scalable reasoners targeting the EL subset of OWL, such as ELK. While the introduction of ELK has been transformative in the incorporation of reasoning into bio-ontology quality control and production pipelines, we have encountered limitations when applying it to use cases involving high throughput query answering or reasoning about datasets describing instances (Aboxes). Whelk is a fast OWL reasoner for combined EL+RL reasoning. As such, it is particularly useful for many biological ontology tasks, particularly those characterized by large Tboxes using the EL subset of OWL, combined with Aboxes targeting the RL subset of OWL. Whelk is implemented in Scala and utilizes immutable functional data structures, which provides advantages when performing incremental or dynamic reasoning tasks. Whelk supports querying complex class expressions at a substantially greater rate than ELK, and can answer queries or perform incremental reasoning tasks in parallel, enabling novel applications of OWL reasoning.
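A minimal flavour of Tbox classification is saturating told SubClassOf axioms to their transitive closure, as in the toy fixpoint loop below (axiom names are invented; real EL calculi in reasoners like Whelk or ELK also handle existential restrictions, conjunctions, and much more):

```python
# Naive saturation of SubClassOf axioms: repeatedly apply transitivity
# until no new subsumption is derived (a tiny fragment of EL reasoning).
told = [("BrainPart", "NervousSystemPart"),
        ("NervousSystemPart", "AnatomicalEntity"),
        ("Hippocampus", "BrainPart")]

def classify(axioms):
    sub = set(axioms)
    changed = True
    while changed:                       # fixpoint iteration
        changed = False
        for a, b in list(sub):
            for c, d in list(sub):
                if b == c and (a, d) not in sub:
                    sub.add((a, d))      # a ⊑ b and b ⊑ d entail a ⊑ d
                    changed = True
    return sub

entailed = classify(told)
# Hippocampus is classified under AnatomicalEntity by transitivity.
```

Queries like "find all genes expressed in the brain or any part of the brain" ride on exactly such entailed subsumptions, computed at scale.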

Cite as

James P. Balhoff and Christopher J. Mungall. Whelk: An OWL EL+RL Reasoner Enabling New Use Cases. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{balhoff_et_al:TGDK.2.2.7,
  author =	{Balhoff, James P. and Mungall, Christopher J.},
  title =	{{Whelk: An OWL EL+RL Reasoner Enabling New Use Cases}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{7:1--7:17},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.7},
  URN =		{urn:nbn:de:0030-drops-225918},
  doi =		{10.4230/TGDK.2.2.7},
  annote =	{Keywords: Web Ontology Language, OWL, Semantic Web, ontology, reasoner}
}
Document
Unified Multimedia Segmentation - A Comprehensive Model for URI-based Media Segment Representation

Authors: Jan Willi, Abraham Bernstein, and Luca Rossetto

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 3 (2024)


Abstract
In multimedia annotation, referencing specific segments of a document is often desired due to its richness and multimodality, but no universal representation for such references exists. This significantly hampers the usage of multimedia content in knowledge graphs, as it is modeled as one large atomic information container. Unstructured data - such as text, audio, images, and video - can commonly be decomposed into its constituent parts, as such documents rarely contain only one semantic concept. Hence, it is reasonable to assume that these previously atomic documents can be decomposed into logical segments. To be processable by the knowledge graph stack, however, one needs to break the atomic nature of multimedia content, providing a mechanism to address media segments. This paper proposes a Unified Segmentation Model capable of depicting arbitrary segmentations on any media document type. The work begins with a formal analysis of multimedia and segmentation, exploring segmentation operations and how to describe them. Building on this analysis, it then develops a practical scheme for expressing segmentation in Uniform Resource Identifiers (URIs). Given that this approach makes segments of multimedia content referenceable, it breaks their atomic nature and makes them first-class citizens within knowledge graphs. The proposed model is implemented as a proof of concept in the MediaGraph Store, a multimedia knowledge graph storage and querying engine.
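For a concrete feel of URI-based segment addressing, the W3C Media Fragments syntax expresses a temporal segment as a fragment like `#t=10,20`; a minimal parser for that form is sketched below. This illustrates the general idea only: the paper's own scheme is more expressive and its syntax may differ.

```python
# Parse a Media-Fragments-style temporal fragment "#t=start,end" from a
# media URI, returning the segment's (start, end) in seconds.
from urllib.parse import urlparse

def parse_temporal_fragment(uri):
    frag = urlparse(uri).fragment          # e.g. "t=10,20"
    if not frag.startswith("t="):
        return None                        # not a temporal fragment
    start, _, end = frag[2:].partition(",")
    return (float(start or 0), float(end) if end else None)

seg = parse_temporal_fragment("https://example.org/video.mp4#t=10,20")
```

Because the segment lives in the URI itself, a knowledge graph can attach statements to the segment IRI directly, making it a first-class node.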

Cite as

Jan Willi, Abraham Bernstein, and Luca Rossetto. Unified Multimedia Segmentation - A Comprehensive Model for URI-based Media Segment Representation. In Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 3, pp. 1:1-1:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{willi_et_al:TGDK.2.3.1,
  author =	{Willi, Jan and Bernstein, Abraham and Rossetto, Luca},
  title =	{{Unified Multimedia Segmentation - A Comprehensive Model for URI-based Media Segment Representation}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:34},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.3.1},
  URN =		{urn:nbn:de:0030-drops-225953},
  doi =		{10.4230/TGDK.2.3.1},
  annote =	{Keywords: Multimodal Knowledge Graphs, Multimedia Segmentation, Multimedia Representation}
}
Document
Resource Paper
The Reasonable Ontology Templates Framework

Authors: Martin Georg Skjæveland and Leif Harald Karlsen

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
Reasonable Ontology Templates (OTTR) is a templating language for representing and instantiating patterns. It is based on simple and generic, but powerful, mechanisms such as recursive macro expansion, term substitution and type systems, and is designed particularly for building and maintaining RDF knowledge graphs and OWL ontologies. In this resource paper, we present the formal specifications that define the OTTR framework. This includes the fundamentals of the OTTR language and the adaptions to make it fit with standard semantic web languages, and two serialization formats developed for semantic web practitioners. We also present the OTTR framework’s support for documenting, publishing and managing template libraries, and for tools for practical bulk instantiation of templates from tabular data and queryable data sources. The functionality of the OTTR framework is available for use through Lutra, an open-source reference implementation, and other independent implementations. We report on the use and impact of OTTR by presenting selected industrial use cases. Finally, we reflect on some design considerations of the language and framework and present ideas for future work.

Cite as

Martin Georg Skjæveland and Leif Harald Karlsen. The Reasonable Ontology Templates Framework. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 5:1-5:54, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{skjaeveland_et_al:TGDK.2.2.5,
  author =	{Skj{\ae}veland, Martin Georg and Karlsen, Leif Harald},
  title =	{{The Reasonable Ontology Templates Framework}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:54},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.5},
  URN =		{urn:nbn:de:0030-drops-225896},
  doi =		{10.4230/TGDK.2.2.5},
  annote =	{Keywords: Ontology engineering, Ontology design patterns, Template mechanism, Macros}
}
Document
Resource Paper
The dblp Knowledge Graph and SPARQL Endpoint

Authors: Marcel R. Ackermann, Hannah Bast, Benedikt Maria Beckermann, Johannes Kalmbach, Patrick Neises, and Stefan Ollinger

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
For more than 30 years, the dblp computer science bibliography has provided quality-checked and curated bibliographic metadata on major computer science journals, proceedings, and monographs. Its semantic content has been published as RDF or similar graph data by third parties in the past, but most of these resources have now disappeared from the web or are no longer actively synchronized with the latest dblp data. In this article, we introduce the dblp Knowledge Graph (dblp KG), the first semantic representation of the dblp data that is designed and maintained by the dblp team. The dataset is augmented by citation data from the OpenCitations corpus. Open and FAIR access to the data is provided via daily updated RDF dumps, persistently archived monthly releases, a new public SPARQL endpoint with a powerful user interface, and a linked open data API. We also make it easy to self-host a replica of our SPARQL endpoint. We provide an introduction on how to work with the dblp KG and the added citation data using our SPARQL endpoint, with several example queries. Finally, we present the results of a small performance evaluation.
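A query against the dblp KG could be composed as below. The property names follow the published dblp RDF schema as the author understands it, but should be verified against the endpoint's documentation; the author IRI in the usage line is a hypothetical placeholder.

```python
# Compose a SPARQL query over the dblp RDF schema: list papers (and
# titles) authored by a given person IRI.
DBLP = "https://dblp.org/rdf/schema#"

def papers_by_author_query(author_iri, limit=10):
    return (
        f"PREFIX dblp: <{DBLP}>\n"
        "SELECT ?paper ?title WHERE {\n"
        f"  ?paper dblp:authoredBy <{author_iri}> ;\n"
        "         dblp:title ?title .\n"
        f"}} LIMIT {limit}"
    )

# Hypothetical author IRI; substitute a real dblp person IRI before use.
q = papers_by_author_query("https://dblp.org/pid/00/0000")
```

The resulting string can be sent to the public SPARQL endpoint or to a self-hosted replica of it.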

Cite as

Marcel R. Ackermann, Hannah Bast, Benedikt Maria Beckermann, Johannes Kalmbach, Patrick Neises, and Stefan Ollinger. The dblp Knowledge Graph and SPARQL Endpoint. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 3:1-3:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{ackermann_et_al:TGDK.2.2.3,
  author =	{Ackermann, Marcel R. and Bast, Hannah and Beckermann, Benedikt Maria and Kalmbach, Johannes and Neises, Patrick and Ollinger, Stefan},
  title =	{{The dblp Knowledge Graph and SPARQL Endpoint}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:23},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.3},
  URN =		{urn:nbn:de:0030-drops-225870},
  doi =		{10.4230/TGDK.2.2.3},
  annote =	{Keywords: dblp, Scholarly Knowledge Graph, Resource, RDF, SPARQL}
}
Document
Resource Paper
MELArt: A Multimodal Entity Linking Dataset for Art

Authors: Alejandro Sierra-Múnera, Linh Le, Gianluca Demartini, and Ralf Krestel

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge


Abstract
Traditional named entity linking (NEL) tools have largely employed a general-domain approach, spanning across various entity types such as persons, organizations, locations, and events in a multitude of contexts. While multimodal entity linking datasets exist (e.g., disambiguation of person names with the help of photographs), there is a need to develop domain-specific resources that represent the unique challenges present in domains like cultural heritage (e.g., stylistic changes through time, diversity of social and political context). To address this gap, our work presents a novel multimodal entity linking benchmark dataset for the art domain together with a comprehensive experimental evaluation of existing NEL methods on this new dataset. The dataset encapsulates various entities unique to the art domain. During the dataset creation process, we also adopt manual human evaluation, providing high-quality labels for our dataset. We introduce an automated process that facilitates the generation of this art dataset, harnessing data from multiple sources (Artpedia, Wikidata and Wikimedia Commons) to ensure its reliability and comprehensiveness. Furthermore, our paper delineates best practices for the integration of art datasets, and presents a detailed performance analysis of general-domain entity linking systems, when applied to domain-specific datasets. Through our research, we aim to address the lack of datasets for NEL in the art domain, providing resources for the development of new, more nuanced, and contextually rich entity linking methods in the realm of art and cultural heritage.
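Stripped of its multimodal and domain-specific aspects, the linking task itself reduces to scoring a mention against candidate entity labels. The toy sketch below uses plain string similarity and invented candidate IDs; real NEL systems add context, embeddings, and, in this dataset's setting, images.

```python
# Toy entity linking: pick the candidate whose label best matches the
# mention by string similarity. Candidates and IDs are invented.
from difflib import SequenceMatcher

candidates = {"Q1": "The Starry Night",
              "Q2": "Starry Night Over the Rhone",
              "Q3": "The Scream"}

def link(mention):
    def score(label):
        return SequenceMatcher(None, mention.lower(), label.lower()).ratio()
    return max(candidates, key=lambda qid: score(candidates[qid]))

best = link("the starry night")
```

The dataset's contribution is precisely the hard cases where such surface matching fails and visual or contextual signals are needed.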

Cite as

Alejandro Sierra-Múnera, Linh Le, Gianluca Demartini, and Ralf Krestel. MELArt: A Multimodal Entity Linking Dataset for Art. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 8:1-8:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{sierramunera_et_al:TGDK.2.2.8,
  author =	{Sierra-M\'{u}nera, Alejandro and Le, Linh and Demartini, Gianluca and Krestel, Ralf},
  title =	{{MELArt: A Multimodal Entity Linking Dataset for Art}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{8:1--8:22},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.8},
  URN =		{urn:nbn:de:0030-drops-225921},
  doi =		{10.4230/TGDK.2.2.8},
  annote =	{Keywords: A Multimodal Entity Linking Dataset, Named Entity Linking, Art Domain, Wikidata, Wikimedia, Artpedia}
}
Strong Faithfulness for ELH Ontology Embeddings

Authors: Victor Lacerda, Ana Ozaki, and Ricardo Guimarães

Published in: TGDK, Volume 2, Issue 3 (2024). Transactions on Graph Data and Knowledge, Volume 2, Issue 3


Abstract
Ontology embedding methods are powerful approaches to represent and reason over structured knowledge in various domains. One advantage of ontology embeddings over knowledge graph embeddings is their ability to capture and impose an underlying schema to which the model must conform. Despite advances, most current approaches do not guarantee that the resulting embedding respects the axioms the ontology entails. In this work, we formally prove that normalized ELH has the strong faithfulness property on convex geometric models, which means that there is an embedding that precisely captures the original ontology. We present a region-based geometric model for embedding normalized ELH ontologies into a continuous vector space. To prove strong faithfulness, our construction takes advantage of the fact that normalized ELH has a finite canonical model. We first prove the statement assuming (possibly) non-convex regions, allowing us to keep the required dimensions low. Then, we impose convexity on the regions and show the property still holds. Finally, we consider reasoning tasks on geometric models and analyze the complexity in the class of convex geometric models used for proving strong faithfulness.
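The paper's notion of a region-based geometric model can be illustrated with the simplest convex regions, axis-aligned boxes: each concept is assigned a region, and a subsumption C ⊑ D is satisfied by the model when C's region is contained in D's. This is a hedged toy sketch under that reading, not the construction used in the paper's faithfulness proof; the concept names and coordinates are made up.

```python
# Toy region-based geometric model: concepts as axis-aligned boxes (which are
# convex), with subsumption checked as region inclusion.

class Box:
    def __init__(self, low, high):
        self.low, self.high = low, high  # per-dimension lower/upper bounds

    def contains(self, other):
        """True if `other` lies entirely inside this box (region inclusion)."""
        return all(sl <= ol and oh <= sh
                   for sl, sh, ol, oh in zip(self.low, self.high,
                                             other.low, other.high))

# Hypothetical interpretation of a small hierarchy: Dog ⊑ Mammal ⊑ Animal.
regions = {
    "Animal": Box([0.0, 0.0], [1.0, 1.0]),
    "Mammal": Box([0.1, 0.1], [0.6, 0.6]),
    "Dog":    Box([0.2, 0.2], [0.4, 0.4]),
}

def satisfies(sub, sup):
    """Check whether this geometric model satisfies sub ⊑ sup."""
    return regions[sup].contains(regions[sub])
```

Strong faithfulness, in these terms, asks for a model in which region inclusion holds exactly for the entailed subsumptions, no more and no fewer.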

Cite as

Victor Lacerda, Ana Ozaki, and Ricardo Guimarães. Strong Faithfulness for ELH Ontology Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 3, pp. 2:1-2:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{lacerda_et_al:TGDK.2.3.2,
  author =	{Lacerda, Victor and Ozaki, Ana and Guimar\~{a}es, Ricardo},
  title =	{{Strong Faithfulness for ELH Ontology Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:29},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.3.2},
  URN =		{urn:nbn:de:0030-drops-225965},
  doi =		{10.4230/TGDK.2.3.2},
  annote =	{Keywords: Knowledge Graph Embeddings, Ontologies, Description Logic}
}
Resource Paper
Horned-OWL: Flying Further and Faster with Ontologies

Authors: Phillip Lord, Björn Gehrke, Martin Larralde, Janna Hastings, Filippo De Bortoli, James A. Overton, James P. Balhoff, and Jennifer Warrender

Published in: TGDK, Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 2, Issue 2


Abstract
Horned-OWL is a library implementing the OWL2 specification in the Rust language. As a library, it is aimed at programmatic processing and manipulation of ontologies rather than GUI development; this is reflected heavily in its design, which prioritizes performance and pluggability. It builds on Rust idioms, treating an ontology as a standard Rust collection, meaning it can take direct advantage of the data manipulation capabilities of the Rust standard library. The core library consists of a data model implementation as well as an IO framework supporting the common OWL formats: RDF, XML, and the OWL functional syntax; an extensive test library ensures compliance with the specification. In addition to the core library, Horned-OWL now supports a growing ecosystem: the py-horned-owl library provides a Python front-end for Horned-OWL, ideal for scripting ontology manipulation; whelk-rs provides reasoning services; and horned-bin provides a number of command-line tools. The library itself is now mature, supporting the entire OWL2 specification in addition to SWRL rules, and the ecosystem is emerging as one of the most extensive for manipulating OWL ontologies.
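The design idea of treating an ontology as a standard collection of axioms, so that ordinary collection operations replace dedicated traversal APIs, can be mirrored in a short sketch. The classes below are hypothetical stand-ins, not the Horned-OWL or py-horned-owl API; they only illustrate the "ontology as collection" idiom the abstract describes.

```python
# Sketch of the "ontology as a standard collection" design: axioms are plain
# value objects in a set, queried with ordinary comprehensions and filters.

from dataclasses import dataclass

@dataclass(frozen=True)
class SubClassOf:
    sub: str
    sup: str

@dataclass(frozen=True)
class AnnotationAssertion:
    subject: str
    value: str

# An "ontology" as a plain set of axiom objects (identifiers are made up).
ontology = {
    SubClassOf("obo:Dog", "obo:Mammal"),
    SubClassOf("obo:Mammal", "obo:Animal"),
    AnnotationAssertion("obo:Dog", "domestic dog"),
}

# Standard collection operations stand in for dedicated ontology traversal:
subclass_axioms = [ax for ax in ontology if isinstance(ax, SubClassOf)]
superclasses_of_dog = {ax.sup for ax in subclass_axioms if ax.sub == "obo:Dog"}
```

In Rust the same pattern falls out of implementing the standard collection traits, which is what lets the library reuse the standard library's iterator machinery rather than reinventing it.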

Cite as

Phillip Lord, Björn Gehrke, Martin Larralde, Janna Hastings, Filippo De Bortoli, James A. Overton, James P. Balhoff, and Jennifer Warrender. Horned-OWL: Flying Further and Faster with Ontologies. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 9:1-9:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{lord_et_al:TGDK.2.2.9,
  author =	{Lord, Phillip and Gehrke, Bj\"{o}rn and Larralde, Martin and Hastings, Janna and De Bortoli, Filippo and Overton, James A. and Balhoff, James P. and Warrender, Jennifer},
  title =	{{Horned-OWL: Flying Further and Faster with Ontologies}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{9:1--9:14},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.9},
  URN =		{urn:nbn:de:0030-drops-225932},
  doi =		{10.4230/TGDK.2.2.9},
  annote =	{Keywords: Web Ontology Language, OWL, Semantic Web}
}
Resource Paper
TØIRoads: A Road Data Model Generation Tool

Authors: Grunde Haraldsson Wesenberg and Ana Ozaki

Published in: TGDK, Volume 2, Issue 2 (2024): Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge, Volume 2, Issue 2


Abstract
We describe road data models which can represent high level features of a road network such as population, points of interest, and road length/cost and capacity, while abstracting from time and geographic location. Such abstraction allows for a simplified traffic usage and congestion analysis that focuses on the high level features. We provide theoretical results regarding mass conservation and sufficient conditions for avoiding congestion within the model. We describe a road data model generation tool, which we call "TØIRoads". We also describe several parameters that can be specified by a TØIRoads user to create graph data that can serve as input for training graph neural networks (or another learning approach that receives graph data as input) for predicting congestion within the model. The road data model generation tool allows, for instance, the study of the effects of population growth and how changes in road capacity can mitigate traffic congestion.
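The kind of abstract road model the abstract describes, nodes carrying population, edges carrying cost and capacity, with no geography or time, can be flattened into the tensors a graph neural network pipeline typically consumes. This is an illustrative sketch with made-up numbers, not the TØIRoads output format.

```python
# Toy road data model: node populations plus edge cost/capacity, flattened
# into GNN-style inputs (feature matrix, edge index, edge attributes).

nodes = {0: {"population": 1200}, 1: {"population": 300}, 2: {"population": 900}}
edges = [  # (from, to, cost, capacity)
    (0, 1, 2.0, 500),
    (1, 2, 1.0, 400),
    (0, 2, 5.0, 800),
]

# GNN-style inputs derived from the model.
node_features = [[nodes[i]["population"]] for i in sorted(nodes)]
edge_index = [(u, v) for u, v, _, _ in edges]
edge_attrs = [[cost, cap] for _, _, cost, cap in edges]

def total_capacity_out(node):
    """Sum of capacities on edges leaving `node` (a crude congestion proxy:
    outgoing demand from a populous node must fit within this budget)."""
    return sum(cap for u, _, _, cap in edges if u == node)
```

Varying the populations or capacities in such generated graphs is what enables the study the abstract mentions, e.g. whether added capacity offsets population growth.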

Cite as

Grunde Haraldsson Wesenberg and Ana Ozaki. TØIRoads: A Road Data Model Generation Tool. In Special Issue on Resources for Graph Data and Knowledge. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 2, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{wesenberg_et_al:TGDK.2.2.6,
  author =	{Wesenberg, Grunde Haraldsson and Ozaki, Ana},
  title =	{{T{\O}IRoads: A Road Data Model Generation Tool}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{6:1--6:12},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.2.6},
  URN =		{urn:nbn:de:0030-drops-225901},
  doi =		{10.4230/TGDK.2.2.6},
  annote =	{Keywords: Road Data, Transportation, Graph Neural Networks, Synthetic Dataset Generation}
}
  • Refine by Type
  • Document/HTML
  • 33 Document/PDF

  • Refine by Publication Year
  • 4 2025
  • 16 2024
  • 13 2023

  • Refine by Author
  • 3 Biswas, Russa
  • 3 Calbimonte, Jean-Paul
  • 3 Jiménez-Ruiz, Ernesto
  • 3 Lissandrini, Matteo
  • 3 Scherp, Ansgar
  • Show More...

  • Refine by Series/Journal
  • 33 TGDK

  • Refine by Classification
  • 11 Computing methodologies → Knowledge representation and reasoning
  • 8 Information systems → Graph-based database models
  • 6 Information systems → Semantic web description languages
  • 5 Computing methodologies → Ontology engineering
  • 3 Applied computing → Life and medical sciences
  • Show More...

  • Refine by Keyword
  • 5 Knowledge Graphs
  • 4 OWL
  • 3 Knowledge graphs
  • 2 Data
  • 2 Explainable AI
  • Show More...
