5 Search Results for "van Erp, Marieke"


Document
Short Paper
Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings (Short Paper)

Authors: Meilin Shi, Krzysztof Janowicz, Zilong Liu, and Kitty Currier

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
In the classic Philosophical Investigations, Ludwig Wittgenstein suggests that the meaning of words is rooted in their use in ordinary language, challenging the idea of fixed rules determining the meaning of words. Likewise, we believe that the meaning of keywords and concepts in academic papers is shaped by their usage within the articles and evolves as research progresses. For example, the terms natural hazards and natural disasters were once used interchangeably, but this is rarely the case today. When searching for archived documents, such as those related to disaster relief, choosing the appropriate keyword is crucial and requires a deeper understanding of the historical context. To improve interoperability and promote reusability from a Research Data Management (RDM) perspective, we examine the dynamic nature of concepts, providing formal definitions of concept drift and its variants. By employing a case study of past COSIT (Conference on Spatial Information Theory) proceedings to support these definitions, we argue that a quantitative formalization can help systematically detect subsequent changes and enhance the overall interpretation of concepts.
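
The formal definitions themselves are not reproduced in this listing. Purely as an illustration of what a quantitative treatment of concept drift can look like, the following minimal sketch measures drift as the cosine distance between a term's co-occurrence vectors in two time-sliced corpora; the corpora, the target term, and the window size are hypothetical assumptions, not the authors' formalization.

# Illustrative sketch only: quantifying drift of a term as the cosine
# distance between its co-occurrence vectors in two time-sliced corpora.
# The corpora, target term, and window size below are hypothetical.
from collections import Counter
from math import sqrt

def context_vector(docs, term, window=2):
    """Count words co-occurring with `term` within a +/- `window` span."""
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            if tok == term:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for t in tokens[lo:hi] if t != term)
    return counts

def cosine_distance(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return 1.0 if na == 0 or nb == 0 else 1.0 - dot / (na * nb)

# Hypothetical time slices of a proceedings corpus.
slice_1990s = ["natural hazards and natural disasters cause widespread damage"]
slice_2020s = ["natural hazards are assessed separately from disaster response"]

drift = cosine_distance(context_vector(slice_1990s, "hazards"),
                        context_vector(slice_2020s, "hazards"))
print(f"drift score for 'hazards': {drift:.2f}")  # larger value = more drift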

Cite as

Meilin Shi, Krzysztof Janowicz, Zilong Liu, and Kitty Currier. Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings (Short Paper). In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 23:1-23:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{shi_et_al:LIPIcs.COSIT.2024.23,
  author =	{Shi, Meilin and Janowicz, Krzysztof and Liu, Zilong and Currier, Kitty},
  title =	{{Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{23:1--23:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.23},
  URN =		{urn:nbn:de:0030-drops-208386},
  doi =		{10.4230/LIPIcs.COSIT.2024.23},
  annote =	{Keywords: Concept Drift, Semantic Aging, Research Data Management}
}
Document
Position
Standardizing Knowledge Engineering Practices with a Reference Architecture

Authors: Bradley P. Allen and Filip Ilievski

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1 (2024): Special Issue on Trends in Graph Data and Knowledge - Part 2


Abstract
Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used given the importance of high-quality knowledge for reliable intelligent agents. Meanwhile, the scope of knowledge engineering, as apparent from its target tasks and use cases, has been shifting, together with its paradigms such as expert systems, semantic web, and language modeling. The intended use cases and supported user requirements across these paradigms have not been analyzed globally, as new paradigms often satisfy prior pain points while possibly introducing new ones. The recent abstraction of systemic patterns into a boxology provides an opening for aligning the requirements and use cases of knowledge engineering with the systems, components, and software that can satisfy them best; however, this direction has not been explored to date. This paper proposes a vision of harmonizing the best practices in the field of knowledge engineering by leveraging the software engineering methodology of creating reference architectures. We describe how a reference architecture can be iteratively designed and implemented to associate user needs with recurring systemic patterns, building on top of existing knowledge engineering workflows and boxologies. We provide a six-step roadmap that can enable the development of such an architecture, consisting of scope definition, selection of information sources, architectural analysis, synthesis of an architecture based on the information source analysis, evaluation through instantiation, and, ultimately, instantiation into a concrete software architecture. We provide an initial design and outcome for the first three steps: definition of architectural scope, selection of information sources, and analysis. As the remaining steps of design, evaluation, and instantiation of the architecture are largely use-case specific, we provide a detailed description of their procedures and point to relevant examples. We expect that following through on this vision will lead to well-grounded reference architectures for knowledge engineering, will advance the ongoing initiatives of organizing the neurosymbolic knowledge engineering space, and will build new links to the software architectures and data science communities.
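
Purely as an illustrative aid for reading the roadmap, the sketch below lists the six steps named in the abstract as an ordered checklist; the data structure and the status flags (reflecting the abstract's statement that scope definition, information-source selection, and analysis already have an initial design and outcome) are assumptions for illustration, not part of the paper.

# Illustrative sketch only: the six roadmap steps named in the abstract,
# modelled as an ordered checklist. The status flags mirror the abstract's
# claim that the first three steps have an initial design/outcome; the
# structure itself is hypothetical.
from dataclasses import dataclass

@dataclass
class RoadmapStep:
    name: str
    has_initial_outcome: bool

ROADMAP = [
    RoadmapStep("scope definition", True),
    RoadmapStep("selection of information sources", True),
    RoadmapStep("architectural analysis", True),
    RoadmapStep("synthesis of a reference architecture", False),
    RoadmapStep("evaluation through instantiation", False),
    RoadmapStep("instantiation into a concrete software architecture", False),
]

for i, step in enumerate(ROADMAP, start=1):
    marker = "x" if step.has_initial_outcome else " "
    print(f"[{marker}] step {i}: {step.name}")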

Cite as

Bradley P. Allen and Filip Ilievski. Standardizing Knowledge Engineering Practices with a Reference Architecture. In Special Issue on Trends in Graph Data and Knowledge - Part 2. Transactions on Graph Data and Knowledge (TGDK), Volume 2, Issue 1, pp. 5:1-5:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@Article{allen_et_al:TGDK.2.1.5,
  author =	{Allen, Bradley P. and Ilievski, Filip},
  title =	{{Standardizing Knowledge Engineering Practices with a Reference Architecture}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{5:1--5:23},
  ISSN =	{2942-7517},
  year =	{2024},
  volume =	{2},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.2.1.5},
  URN =		{urn:nbn:de:0030-drops-198623},
  doi =		{10.4230/TGDK.2.1.5},
  annote =	{Keywords: knowledge engineering, knowledge graphs, quality attributes, software architectures, sociotechnical systems}
}
Document
Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century (Dagstuhl Seminar 22372)

Authors: Paul Groth, Elena Simperl, Marieke van Erp, and Denny Vrandečić

Published in: Dagstuhl Reports, Volume 12, Issue 9 (2023)


Abstract
This report documents the programme and outcomes of Dagstuhl Seminar 22372 "Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century" held in September 2022. The seminar aimed to gain a better understanding of the way knowledge graphs are created, maintained, and used today, and identify research challenges throughout the knowledge engineering life cycle, including tasks such as modelling, representation, reasoning, and evolution. The participants identified directions of research to answer these challenges, which will form the basis for new methodologies, methods, and tools, applicable to varied AI systems in which knowledge graphs are used, for instance, in natural language processing, or in information retrieval. The seminar brought together a snapshot of the knowledge engineering and adjacent communities, including leading experts, academics, practitioners, and rising stars in those fields. It fulfilled its aims - the participants took inventory of existing and emerging solutions, discussed open problems and practical challenges, and identified ample opportunities for novel research, technology transfer, and inter-disciplinary collaborations. Among the topics of discussion were: designing engineering methodologies for knowledge graphs, integrating large language models and structured data into knowledge engineering pipelines, neural methods for knowledge engineering, responsible use of AI in knowledge graph construction, other forms of knowledge representations, and generating user and developer buy-in. Besides a range of joint publications, hackathons, and project proposals, the participants suggested joint activities with other scientific communities, in particular those working on large language models, generative AI, FAccT (fairness, accountability, transparency), and human-AI interaction. The discussions were captured in visual summaries thanks to Catherine Allan - you can find more about her work at https://www.catherineallan.co.uk/. The summaries are arrayed throughout this report. Lastly, knowledge about the seminar is captured in Wikidata at https://www.wikidata.org/wiki/Q113961931

Cite as

Paul Groth, Elena Simperl, Marieke van Erp, and Denny Vrandečić. Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century (Dagstuhl Seminar 22372). In Dagstuhl Reports, Volume 12, Issue 9, pp. 60-120, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@Article{groth_et_al:DagRep.12.9.60,
  author =	{Groth, Paul and Simperl, Elena and van Erp, Marieke and Vrande\v{c}i\'{c}, Denny},
  title =	{{Knowledge Graphs and their Role in the Knowledge Engineering of the 21st Century (Dagstuhl Seminar 22372)}},
  pages =	{60--120},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{9},
  editor =	{Groth, Paul and Simperl, Elena and van Erp, Marieke and Vrande\v{c}i\'{c}, Denny},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.9.60},
  URN =		{urn:nbn:de:0030-drops-178105},
  doi =		{10.4230/DagRep.12.9.60},
  annote =	{Keywords: Dagstuhl Seminar}
}
Document
Short Paper
A Proposal for a Two-Way Journey on Validating Locations in Unstructured and Structured Data

Authors: Ilkcan Keles, Omar Qawasmeh, Tabea Tietz, Ludovica Marinucci, Roberto Reda, and Marieke van Erp

Published in: OASIcs, Volume 70, 2nd Conference on Language, Data and Knowledge (LDK 2019)


Abstract
The Web of Data has grown explosively over the past few years, and as with any dataset, there are bound to be invalid statements in the data, as well as gaps. Natural Language Processing (NLP) is gaining interest as a means to fill such gaps by transforming (unstructured) text into structured data. However, there is currently a fundamental mismatch in approaches between Linked Data and NLP, as the latter is often based on statistical methods and the former on explicitly modelling knowledge. Nevertheless, these fields can strengthen each other by joining forces. In this position paper, we argue that using Linked Data to validate the output of an NLP system, and using textual data to validate Linked Open Data (LOD) cloud statements, is a promising research avenue. We illustrate our proposal with a proof of concept on a corpus of historical travel stories.
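
The paper's proof of concept is not included in this listing. As a rough, hypothetical illustration of the two-way idea, the sketch below cross-checks place names extracted from text against a set of location labels taken from a linked-data export, flagging both unverified extractions and LOD entries never attested in the corpus; the data and the exact-match rule are assumptions, not the authors' pipeline, which would instead involve an NER system and queries against the LOD cloud.

# Illustrative sketch only: two-way validation between extracted place names
# and a (hypothetical) set of location labels exported from the LOD cloud.
# Real pipelines would use an NER system and SPARQL queries instead.

extracted_locations = {"Batavia", "Soerabaja", "Atlantis"}   # from NLP over travel stories
lod_location_labels = {"Batavia", "Soerabaja", "Semarang"}   # from a linked-data export

# Direction 1: does each extracted location exist in the LOD cloud?
unverified_extractions = extracted_locations - lod_location_labels

# Direction 2: which LOD locations are never attested in the text corpus?
unattested_lod_entries = lod_location_labels - extracted_locations

print("extractions with no LOD match:", sorted(unverified_extractions))
print("LOD entries not attested in text:", sorted(unattested_lod_entries))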

Cite as

Ilkcan Keles, Omar Qawasmeh, Tabea Tietz, Ludovica Marinucci, Roberto Reda, and Marieke van Erp. A Proposal for a Two-Way Journey on Validating Locations in Unstructured and Structured Data. In 2nd Conference on Language, Data and Knowledge (LDK 2019). Open Access Series in Informatics (OASIcs), Volume 70, pp. 13:1-13:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{keles_et_al:OASIcs.LDK.2019.13,
  author =	{Keles, Ilkcan and Qawasmeh, Omar and Tietz, Tabea and Marinucci, Ludovica and Reda, Roberto and van Erp, Marieke},
  title =	{{A Proposal for a Two-Way Journey on Validating Locations in Unstructured and Structured Data}},
  booktitle =	{2nd Conference on Language, Data and Knowledge (LDK 2019)},
  pages =	{13:1--13:8},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-105-4},
  ISSN =	{2190-6807},
  year =	{2019},
  volume =	{70},
  editor =	{Eskevich, Maria and de Melo, Gerard and F\"{a}th, Christian and McCrae, John P. and Buitelaar, Paul and Chiarcos, Christian and Klimek, Bettina and Dojchinovski, Milan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.LDK.2019.13},
  URN =		{urn:nbn:de:0030-drops-103778},
  doi =		{10.4230/OASIcs.LDK.2019.13},
  annote =	{Keywords: data validity, natural language processing, linked data}
}
Document
Finding Stories in 1,784,532 Events: Scaling Up Computational Models of Narrative

Authors: Marieke van Erp, Antske Fokkens, and Piek Vossen

Published in: OASIcs, Volume 41, 2014 Workshop on Computational Models of Narrative


Abstract
Information professionals face the challenge of making sense of an ever-increasing amount of information. Storylines can provide a useful way to present relevant information because they reveal explanatory relations between events. In this position paper, we present and discuss the four main challenges that make it difficult to get to these stories and our first ideas on how to start resolving them.
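
The paper discusses challenges rather than an implementation. Purely as an illustration of the underlying storyline idea, the sketch below links hypothetical event records into candidate storylines by shared participant and chronological order; the events and the naive linking rule are assumptions, not the authors' model.

# Illustrative sketch only: grouping event records into candidate storylines
# by shared participant, ordered by date. The events and the simple linking
# rule are hypothetical.
from collections import defaultdict

events = [
    {"date": "2014-01-03", "actor": "Acme Corp", "action": "announces merger"},
    {"date": "2014-02-11", "actor": "Acme Corp", "action": "faces regulator inquiry"},
    {"date": "2014-01-20", "actor": "Globex",    "action": "reports record profit"},
    {"date": "2014-03-02", "actor": "Acme Corp", "action": "merger approved"},
]

storylines = defaultdict(list)
for event in events:
    storylines[event["actor"]].append(event)   # naive linking: shared actor

for actor, chain in storylines.items():
    chain.sort(key=lambda e: e["date"])        # chronological ordering (ISO dates)
    print(f"storyline for {actor}:")
    for e in chain:
        print(f"  {e['date']}  {e['action']}")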

Cite as

Marieke van Erp, Antske Fokkens, and Piek Vossen. Finding Stories in 1,784,532 Events: Scaling Up Computational Models of Narrative. In 2014 Workshop on Computational Models of Narrative. Open Access Series in Informatics (OASIcs), Volume 41, pp. 241-245, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@InProceedings{vanerp_et_al:OASIcs.CMN.2014.241,
  author =	{van Erp, Marieke and Fokkens, Antske and Vossen, Piek},
  title =	{{Finding Stories in 1,784,532 Events: Scaling Up Computational Models of Narrative}},
  booktitle =	{2014 Workshop on Computational Models of Narrative},
  pages =	{241--245},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-939897-71-2},
  ISSN =	{2190-6807},
  year =	{2014},
  volume =	{41},
  editor =	{Finlayson, Mark A. and Meister, Jan Christoph and Bruneau, Emile G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.CMN.2014.241},
  URN =		{urn:nbn:de:0030-drops-46601},
  doi =		{10.4230/OASIcs.CMN.2014.241},
  annote =	{Keywords: big data, news, aggregation, story detection}
}
  • Refine by Author
  • 3 van Erp, Marieke
  • 1 Allen, Bradley P.
  • 1 Currier, Kitty
  • 1 Fokkens, Antske
  • 1 Groth, Paul

  • Refine by Classification
  • 2 Computing methodologies → Knowledge representation and reasoning
  • 2 Computing methodologies → Natural language processing
  • 1 Computing methodologies → Information extraction
  • 1 Computing methodologies → Machine learning
  • 1 Computing methodologies → Ontology engineering

  • Refine by Keyword
  • 1 Concept Drift
  • 1 Dagstuhl Seminar
  • 1 Research Data Management
  • 1 Semantic Aging
  • 1 aggregation

  • Refine by Type
  • 5 document

  • Refine by Publication Year
  • 2 2024
  • 1 2014
  • 1 2019
  • 1 2023
