Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3




Publication Details

  • Published: 2025-12-10
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik


Documents

Complete Issue
TGDK, Volume 3, Issue 3, Complete Issue


Cite as

Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 1-144, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{TGDK.3.3,
  title =	{{TGDK, Volume 3, Issue 3, Complete Issue}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1--144},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3},
  URN =		{urn:nbn:de:0030-drops-252642},
  doi =		{10.4230/TGDK.3.3},
  annote =	{Keywords: TGDK, Volume 3, Issue 3, Complete Issue}
}
Front Matter
Front Matter, Table of Contents, List of Authors


Cite as

Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 0:i-0:viii, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{TGDK.3.3.0,
  title =	{{Front Matter, Table of Contents, List of Authors}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{0:i--0:viii},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.0},
  URN =		{urn:nbn:de:0030-drops-252632},
  doi =		{10.4230/TGDK.3.3.0},
  annote =	{Keywords: Front Matter, Table of Contents, List of Authors}
}
Research
A Logic Programming Approach to Repairing SHACL Constraint Violations

Authors: Shqiponja Ahmetaj, Robert David, Axel Polleres, and Mantas Šimkus


Abstract
The Shapes Constraint Language (SHACL) is a recent W3C recommendation for validating RDF graphs against shape constraints checked on target nodes of a data graph. The standard also describes validation reports, which detail the results of the validation process. When constraints are violated, the validation report should explain the reasons for non-validation, offering guidance on how to identify or fix the violations in the data graph. Since the specification leaves it to SHACL processors to define such explanations, recent work proposed explanations in the style of database repairs, where a repair is a set of additions to or deletions from the data graph such that the resulting graph validates against the constraints. In this paper, we study such repairs for non-recursive SHACL, the largest fragment of SHACL that is fully defined in the specification. We propose an algorithm that computes repairs by encoding the explanation problem into a logic program in Answer Set Programming (ASP), whose answer sets contain (minimal) repairs. We then study the scenario in which not all targets can be repaired simultaneously, which may happen due to overall unsatisfiability or conflicting constraints. We introduce a relaxed notion of validation that validates a (maximal) subset of the targets, and adapt the ASP translation to account for this relaxation. Finally, we add support for repairing constraints that use property paths and equality of paths. Our implementation in clingo is, to the best of our knowledge, the first implementation of a repair program for SHACL.
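
To give a flavor of the repair-program idea described in this abstract, here is a minimal, self-contained sketch using clingo's Python API. The shape (every target node needs a hasName value), the facts, and all predicate names are hypothetical illustrations, not the paper's actual encoding.

import clingo

# A logic program whose optimal answer sets are minimal repairs of a tiny
# data graph against one hypothetical shape constraint.
PROGRAM = """
% Data graph: triple(Subject, Property, Object).
triple(alice, knows, bob).

% Hypothetical pool of additions the repair may draw from.
candidate(alice, hasName, "Alice").

% Choice rules: any candidate may be added; any existing triple deleted.
{ add(S,P,O) } :- candidate(S,P,O).
{ del(S,P,O) } :- triple(S,P,O).

% Repaired graph = (original minus deletions) plus additions.
repaired(S,P,O) :- triple(S,P,O), not del(S,P,O).
repaired(S,P,O) :- add(S,P,O).

% Shape constraint on the target node alice: at least one hasName value.
has_name(S) :- repaired(S, hasName, _).
:- not has_name(alice).

% Prefer repairs with the fewest changes.
#minimize { 1,add,S,P,O : add(S,P,O); 1,del,S,P,O : del(S,P,O) }.

#show add/3.
#show del/3.
"""

def on_model(m):
    if m.optimality_proven:  # report only optimal (i.e., minimal) repairs
        print("repair:", [str(a) for a in m.symbols(shown=True)])

ctl = clingo.Control(["--opt-mode=optN", "0"])  # enumerate all optimal models
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=on_model)

In this toy instance the single optimal answer set adds the missing hasName triple and deletes nothing.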

Cite as

Shqiponja Ahmetaj, Robert David, Axel Polleres, and Mantas Šimkus. A Logic Programming Approach to Repairing SHACL Constraint Violations. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 1:1-1:36, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{ahmetaj_et_al:TGDK.3.3.1,
  author =	{Ahmetaj, Shqiponja and David, Robert and Polleres, Axel and \v{S}imkus, Mantas},
  title =	{{A Logic Programming Approach to Repairing SHACL Constraint Violations}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:36},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.1},
  URN =		{urn:nbn:de:0030-drops-252124},
  doi =		{10.4230/TGDK.3.3.1},
  annote =	{Keywords: SHACL, Shapes Constraint Language, Database Repairs, Knowledge Graphs, Semantic Web, Answer Set Programming}
}
Use Case
Automating Invoice Validation with Knowledge Graphs: Optimizations and Practical Lessons

Authors: Johannes Mäkelburg and Maribel Acosta


Abstract
To increase the efficiency of creating, distributing, and processing invoices, invoicing is handled via Electronic Data Interchange (EDI), in which invoices are exchanged in a standardized electronic format rather than on paper. While EDIFACT is widely used for electronic invoicing, there is no standardized approach for validating invoice content. In this work, we tackle the problem of automatically validating electronic invoices in the EDIFACT format by leveraging knowledge graph (KG) technologies. We build on a previously developed pipeline that transforms EDIFACT invoices into RDF KGs. The resulting graphs are validated against SHACL constraints defined in collaboration with domain experts. Here, we improve the pipeline by enhancing the correctness of the invoice representation, reducing validation time, and introducing error prioritization through the severity predicate in SHACL. These improvements make validation results easier to interpret and significantly reduce the manual effort required. Our evaluation confirms that the approach is correct, efficient, and practical for real-world use.
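
To illustrate the severity-based prioritization mentioned in this abstract, here is a minimal sketch with rdflib and pySHACL. The invoice vocabulary (ex:Invoice, ex:amount, ex:dueDate) is hypothetical, not the paper's ontology.

from rdflib import Graph
from pyshacl import validate

# Hypothetical invoice instance missing both an amount and a due date.
DATA = """
@prefix ex: <http://example.org/> .
ex:inv1 a ex:Invoice .
"""

# Two constraints with different severities: a missing amount blocks
# processing, while a missing due date is only flagged for later review.
SHAPES = """
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .

ex:InvoiceShape a sh:NodeShape ;
    sh:targetClass ex:Invoice ;
    sh:property [ sh:path ex:amount  ; sh:minCount 1 ; sh:severity sh:Violation ] ;
    sh:property [ sh:path ex:dueDate ; sh:minCount 1 ; sh:severity sh:Warning ] .
"""

data = Graph().parse(data=DATA, format="turtle")
shapes = Graph().parse(data=SHAPES, format="turtle")
conforms, report, _ = validate(data, shacl_graph=shapes)

# The validation report is itself an RDF graph, so results can be grouped
# by severity and violations handled before warnings.
for row in report.query("""
    PREFIX sh: <http://www.w3.org/ns/shacl#>
    SELECT ?severity ?path WHERE {
        ?r a sh:ValidationResult ;
           sh:resultSeverity ?severity ;
           sh:resultPath ?path .
    } ORDER BY ?severity
"""):
    print(row.severity, row.path)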

Cite as

Johannes Mäkelburg and Maribel Acosta. Automating Invoice Validation with Knowledge Graphs: Optimizations and Practical Lessons. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 2:1-2:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{makelburg_et_al:TGDK.3.3.2,
  author =	{M\"{a}kelburg, Johannes and Acosta, Maribel},
  title =	{{Automating Invoice Validation with Knowledge Graphs: Optimizations and Practical Lessons}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{2:1--2:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.2},
  URN =		{urn:nbn:de:0030-drops-252137},
  doi =		{10.4230/TGDK.3.3.2},
  annote =	{Keywords: Electronic Invoice, Ontology, EDIFACT, RDF, RML, SHACL}
}
Resource
Supporting Psychometric Instrument Usage Through the POEM Ontology

Authors: Kelsey Rook, Henrique Santos, Deborah L. McGuinness, Manuel S. Sprung, Paulo Pinheiro, and Bruce F. Chorpita


Abstract
Psychometrics is the field concerned with measurement in psychology, particularly the assessment of social and psychological dimensions in humans. The relationships between psychometric entities are critical to finding an appropriate assessment instrument, especially in clinical psychology and mental healthcare, where providing the best care based on empirical evidence is crucial. We aim to model these entities, which include psychometric questionnaires and their component elements, the subject and respondent, and the latent variables being assessed. The current standard for questionnaire-based assessment relies on text-based distribution of instruments, so a structured representation is needed to capture these relationships: it can enhance the accessibility and use of existing measures, encourage reuse of questionnaires and their component elements, and, by increasing interoperability, enable sophisticated reasoning over assessment instruments and results. We present the design process and architecture of such a domain ontology, the Psychometric Ontology of Experiences and Measures (POEM), situate it within the context of related ontologies, and demonstrate its practical utility by evaluating it against a series of competency questions concerning the creation, use, and reuse of psychometric questionnaires in clinical, research, and development settings.
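
As an illustration of the competency-question-driven evaluation described in this abstract, here is a minimal rdflib sketch. The vocabulary (ex:Questionnaire, ex:measures, ex:Anxiety) is a hypothetical stand-in for POEM's actual terms.

from rdflib import Graph

# A toy instrument graph; the IRIs below are hypothetical, not POEM's.
TTL = """
@prefix ex: <http://example.org/poem#> .
ex:QuestionnaireA a ex:Questionnaire ; ex:measures ex:Anxiety, ex:Depression .
ex:QuestionnaireB a ex:Questionnaire ; ex:measures ex:Anxiety .
"""
g = Graph().parse(data=TTL, format="turtle")

# Competency question: which questionnaires assess the latent variable Anxiety?
CQ = """
PREFIX ex: <http://example.org/poem#>
SELECT ?q WHERE { ?q a ex:Questionnaire ; ex:measures ex:Anxiety . }
"""
for row in g.query(CQ):
    print(row.q)  # prints both questionnaires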

Cite as

Kelsey Rook, Henrique Santos, Deborah L. McGuinness, Manuel S. Sprung, Paulo Pinheiro, and Bruce F. Chorpita. Supporting Psychometric Instrument Usage Through the POEM Ontology. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 3:1-3:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{rook_et_al:TGDK.3.3.3,
  author =	{Rook, Kelsey and Santos, Henrique and McGuinness, Deborah L. and Sprung, Manuel S. and Pinheiro, Paulo and Chorpita, Bruce F.},
  title =	{{Supporting Psychometric Instrument Usage Through the POEM Ontology}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:19},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.3},
  URN =		{urn:nbn:de:0030-drops-252148},
  doi =		{10.4230/TGDK.3.3.3},
  annote =	{Keywords: ontology, ontology development, psychometric assessment, psychometric ontology}
}
Research
Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web

Authors: Florian Ruosch, Cristina Sarasua, and Abraham Bernstein


Abstract
In Argument Mining, predicting argumentative relations between texts (or spans) remains one of the most challenging tasks, even more so in the cross-document setting. This paper makes three key contributions to advance research in this domain. We first extend an existing dataset, the Sci-Arg corpus, by annotating it with explicit inter-document argumentative relations, thereby allowing arguments to span several documents and form an Argument Web; these new annotations are published using Semantic Web technologies (RDF, OWL). Second, we explore and evaluate three automated approaches for predicting these inter-document argumentative relations, establishing critical baselines on the new dataset. We find that a simple classifier based on discourse indicators with access to context outperforms neural methods. Third, we conduct a comparative analysis of these approaches in both intra- and inter-document settings, identifying statistically significant differences that indicate the necessity of distinguishing between the two scenarios. Our findings highlight significant challenges in this complex domain and open crucial avenues for future research on the Argument Web of Science, particularly for those interested in leveraging Semantic Web technologies and knowledge graphs to understand scholarly discourse. With this, we provide the first stepping stones, in the form of a benchmark dataset, three baseline methods, and an initial analysis, for a systematic exploration of this field relevant to the Web of Data and Science.
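
To give a flavor of the discourse-indicator baseline mentioned in this abstract, here is a minimal sketch. The cue lists and the context-only heuristic are illustrative; the paper's classifier and feature set are more involved.

# Illustrative cue-word lists; real indicator inventories are much larger.
SUPPORT_CUES = {"therefore", "consistent with", "in line with", "confirms"}
ATTACK_CUES = {"however", "in contrast", "contradicts", "unlike"}

def classify_relation(context: str) -> str:
    """Label the argumentative relation suggested by discourse indicators
    in the text surrounding a citation or reference."""
    ctx = context.lower()
    if any(cue in ctx for cue in ATTACK_CUES):
        return "attack"
    if any(cue in ctx for cue in SUPPORT_CUES):
        return "support"
    return "none"

print(classify_relation(
    "However, these results contradict the findings of prior work."
))  # -> attack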

Cite as

Florian Ruosch, Cristina Sarasua, and Abraham Bernstein. Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{ruosch_et_al:TGDK.3.3.4,
  author =	{Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham},
  title =	{{Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:33},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.4},
  URN =		{urn:nbn:de:0030-drops-252159},
  doi =		{10.4230/TGDK.3.3.4},
  annote =	{Keywords: Argument Mining, Large Language Models, Knowledge Graphs, Link Prediction}
}
Use Case
LLM-Supported Manufacturing Mapping Generation

Authors: Wilma Johanna Schmidt, Irlan Grangel-González, Adrian Paschke, and Evgeny Kharlamov


Abstract
Large manufacturing companies such as Bosch operate thousands of production lines, each comprising up to dozens of production machines and other equipment; at this scale, even simple inventory questions, such as the location and quantity of a particular equipment type, require non-trivial solutions. Answering them requires integrating multiple heterogeneous data sets, which is time-consuming and error-prone and demands both domain and knowledge experts. Knowledge graphs (KGs) are practical for consolidating inventory data by bringing it into a common format and linking inventory items. However, KG creation and maintenance pose challenges of their own, as mappings are needed to connect data sets and ontologies. In this work, we address these challenges by exploring LLM-supported, context-enhanced generation of both YARRRML and RML mappings. Facing large ontologies in the manufacturing domain and token limitations in LLM prompts, we further evaluate ontology reduction methods within our approach. We evaluate the approach quantitatively against reference mappings created manually by experts and, for YARRRML, also qualitatively through expert feedback. This work extends a prior exploration of LLM-supported, context-enhanced mapping generation for YARRRML [Schmidt et al., 2025] with comprehensive analyses of RML mappings and an evaluation of ontology reduction. We also publish the source code of this work. Our approach provides valuable support for creating manufacturing mappings and for handling data and schema updates.
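
As an illustration of the workflow described in this abstract (ontology reduction to fit token limits, then prompt construction for mapping generation), here is a minimal Python sketch. The reduction heuristic, prompt wording, and all names are hypothetical, not the authors' implementation.

from typing import List

def reduce_ontology(terms: List[str], sample_keys: List[str], budget: int) -> List[str]:
    """Naive reduction: keep only ontology terms matching keys seen in the
    data sample, cut off at a crude whitespace-token budget."""
    hits = [t for t in terms if any(k.lower() in t.lower() for k in sample_keys)]
    kept, used = [], 0
    for t in hits:
        used += len(t.split())
        if used > budget:
            break
        kept.append(t)
    return kept

def build_prompt(data_sample: str, sample_keys: List[str], ontology_terms: List[str]) -> str:
    """Assemble a context-enhanced prompt from the reduced ontology."""
    context = "\n".join(reduce_ontology(ontology_terms, sample_keys, budget=200))
    return (
        "Generate a YARRRML mapping for the data sample below, "
        "using only the listed ontology terms.\n\n"
        f"Ontology terms:\n{context}\n\nData sample:\n{data_sample}\n"
    )

prompt = build_prompt(
    data_sample="line_id,machine_type,plant\nL1,WeldingRobot,Stuttgart",
    sample_keys=["line", "machine", "plant"],
    ontology_terms=["ex:ProductionLine", "ex:Machine", "ex:machineType", "ex:Plant"],
)
print(prompt)  # this prompt would then be sent to an LLM (call omitted here)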

Cite as

Wilma Johanna Schmidt, Irlan Grangel-González, Adrian Paschke, and Evgeny Kharlamov. LLM-Supported Manufacturing Mapping Generation. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 5:1-5:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{schmidt_et_al:TGDK.3.3.5,
  author =	{Schmidt, Wilma Johanna and Grangel-Gonz\'{a}lez, Irlan and Paschke, Adrian and Kharlamov, Evgeny},
  title =	{{LLM-Supported Manufacturing Mapping Generation}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:22},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.5},
  URN =		{urn:nbn:de:0030-drops-252164},
  doi =		{10.4230/TGDK.3.3.5},
  annote =	{Keywords: Mapping Generation, Knowledge Graph Construction, Ontology Reduction, RML, YARRRML, LLM, Manufacturing}
}
