74 Search Results for "Sun, He"


On the Computational Cost of Knowledge Graph Embeddings

Authors: Victor Charpenay, Mansour Zoubeirou A Mayaki, and Antoine Zimmermann

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1 (2026)


Abstract
For over a decade, numerous Knowledge Graph Embedding (KGE) models have been designed and evaluated on reference datasets, with ever-increasing performance. In this paper, we re-evaluate these models with respect to their computational efficiency during training, by estimating the computational cost of the training procedure in floating-point operations. We design a cost model based on analytical expressions and apply it to a collection of 20 KGE models representative of the state of the art. We show that dimensionality and parameter efficiency, used in the literature to compare models, are not suitable measures of the true cost of a model. Through fixed-budget experiments, a novel approach to evaluating KGE models based on cost estimates, we re-assess the relative performance of model families. Bilinear models such as ComplEx underperform under a low computational budget, while hyperbolic linear models appear to offer no particular benefit over simpler Euclidean models, especially the MuRE model. Neural models such as ConvE or CompGCN achieve reasonable performance in the literature, but their high computational cost appears unnecessary when compared with other models. The trade-off between efficiency and expressivity of both linear and neural models remains to be explored further.
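The kind of analytical cost expression the abstract refers to can be illustrated with back-of-the-envelope FLOP counts per scored triple. The formulas below are simplifying assumptions for this sketch, not the paper's cost model, and the per-dimension counts are rough estimates.

```python
# Rough per-triple floating-point operation counts for three well-known
# KGE scoring functions. These formulas are illustrative simplifications,
# not the paper's cost model.

def flops_transe(d: int) -> int:
    # TransE score -||h + r - t||: d additions, d subtractions,
    # d squarings, and d - 1 additions for the norm reduction
    return 3 * d + (d - 1)

def flops_mure(d: int) -> int:
    # MuRE-style score: a diagonal relation matrix adds ~d multiplications
    # on top of a TransE-like distance (hypothetical simplification)
    return flops_transe(d) + d

def flops_complex(d: int) -> int:
    # ComplEx score Re(<h, r, conj(t)>): ~2 complex multiplications per
    # dimension (6 real FLOPs each) plus the final reduction
    return 12 * d + (d - 1)

for d in (100, 500):
    print(d, flops_transe(d), flops_mure(d), flops_complex(d))
```

Already at this crude level the point of the abstract shows up: at equal dimensionality d, the bilinear ComplEx score costs roughly three times as many operations as the Euclidean distance-based scores, so comparing models by d alone misstates their cost.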

Cite as

Victor Charpenay, Mansour Zoubeirou A Mayaki, and Antoine Zimmermann. On the Computational Cost of Knowledge Graph Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 4, Issue 1, pp. 1:1-1:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@Article{charpenay_et_al:TGDK.4.1.1,
  author =	{Charpenay, Victor and Zoubeirou A Mayaki, Mansour and Zimmermann, Antoine},
  title =	{{On the Computational Cost of Knowledge Graph Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:30},
  ISSN =	{2942-7517},
  year =	{2026},
  volume =	{4},
  number =	{1},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.4.1.1},
  URN =		{urn:nbn:de:0030-drops-256863},
  doi =		{10.4230/TGDK.4.1.1},
  annote =	{Keywords: Knowledge Graph Embedding, Parameter Efficiency, Computational Budget, Green AI}
}
Fully Dynamic Spectral Sparsification for Directed Hypergraphs

Authors: Sebastian Forster, Gramoz Goranci, and Ali Momeni

Published in: LIPIcs, Volume 364, 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)


Abstract
There has been a surge of interest in spectral hypergraph sparsification, a natural generalization of spectral sparsification for graphs. In this paper, we present a simple fully dynamic algorithm for maintaining spectral hypergraph sparsifiers of directed hypergraphs. Our algorithm achieves a near-optimal size of O(n²/ε² log⁷ m) and amortized update time of O(r² log³ m), where n is the number of vertices, and m and r respectively upper bound the number of hyperedges and the rank of the hypergraph at any time. We also extend our approach to the parallel batch-dynamic setting, where a batch of any k hyperedge insertions or deletions can be processed with O(kr² log³ m) amortized work and O(log² m) depth. This constitutes the first spectral-based sparsification algorithm in this setting.

Cite as

Sebastian Forster, Gramoz Goranci, and Ali Momeni. Fully Dynamic Spectral Sparsification for Directed Hypergraphs. In 43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 364, pp. 38:1-38:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{forster_et_al:LIPIcs.STACS.2026.38,
  author =	{Forster, Sebastian and Goranci, Gramoz and Momeni, Ali},
  title =	{{Fully Dynamic Spectral Sparsification for Directed Hypergraphs}},
  booktitle =	{43rd International Symposium on Theoretical Aspects of Computer Science (STACS 2026)},
  pages =	{38:1--38:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-412-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{364},
  editor =	{Mahajan, Meena and Manea, Florin and McIver, Annabelle and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2026.38},
  URN =		{urn:nbn:de:0030-drops-255272},
  doi =		{10.4230/LIPIcs.STACS.2026.38},
  annote =	{Keywords: Spectral sparsification, Dynamic algorithms, (Directed) hypergraphs, Data structures}
}
Diffie-Hellman Key Exchange from Commutativity to Group Laws

Authors: Dung Hoang Duong, Youming Qiao, and Chuanqi Zhang

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
In Diffie-Hellman key exchange, the commutativity of power operations is instrumental in the agreement of keys. Viewing commutativity as a law in abelian groups, we propose Diffie-Hellman key exchange in the group action framework (Brassard-Yung, Crypto'90; Ji-Qiao-Song-Yun, TCC'19), for actions of non-abelian groups with laws. The security of this protocol is shown, following Fischlin, Günther, Schmidt, and Warinschi (IEEE S&P'16), based on a pseudorandom group action assumption. A concrete instantiation is proposed based on the monomial code equivalence problem.
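The commutativity the abstract starts from can be seen in classic Diffie-Hellman over Z_p^*: since (g^a)^b = (g^b)^a, both parties derive the same key. The parameters below are tiny and purely illustrative, not secure, and unrelated to the group-action instantiation proposed in the paper.

```python
# Toy Diffie-Hellman key exchange over Z_p^*, demonstrating that key
# agreement rests on the commutativity of power operations.
import secrets

p = 2**61 - 1          # a Mersenne prime, used here as a toy modulus
g = 3                  # base for the demonstration

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)       # Alice's public value g^a mod p
B = pow(g, b, p)       # Bob's public value g^b mod p

k_alice = pow(B, a, p) # (g^b)^a mod p
k_bob = pow(A, b, p)   # (g^a)^b mod p
assert k_alice == k_bob            # equal by commutativity of exponents
print("shared key agreed:", k_alice == k_bob)
```

The paper's contribution is precisely to replace this abelian-group commutativity with more general group laws in the group action framework.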

Cite as

Dung Hoang Duong, Youming Qiao, and Chuanqi Zhang. Diffie-Hellman Key Exchange from Commutativity to Group Laws. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 52:1-52:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{duong_et_al:LIPIcs.ITCS.2026.52,
  author =	{Duong, Dung Hoang and Qiao, Youming and Zhang, Chuanqi},
  title =	{{Diffie-Hellman Key Exchange from Commutativity to Group Laws}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{52:1--52:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.52},
  URN =		{urn:nbn:de:0030-drops-253396},
  doi =		{10.4230/LIPIcs.ITCS.2026.52},
  annote =	{Keywords: Diffie-Hellman, Key Exchange, Group Laws, Group Actions, Code Equivalence}
}
On Approximating the f-Divergence Between Two Ising Models

Authors: Weiming Feng and Yucheng Fu

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
The f-divergence is a fundamental notion that measures the difference between two distributions. In this paper, we study the problem of approximating the f-divergence between two Ising models, which is a generalization of recent work on approximating the TV-distance. Given two Ising models ν and μ, which are specified by their interaction matrices and external fields, the problem is to approximate the f-divergence D_f (ν ‖ μ) within an arbitrary relative error e^{±ε}. For χ^α-divergence with a constant integer α, we establish both algorithmic and hardness results. The algorithm works in a parameter regime that matches the hardness result. Our algorithm can be extended to other f-divergences such as α-divergence, Kullback-Leibler divergence, Rényi divergence, Jensen-Shannon divergence, and squared Hellinger distance.
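The definition at work in the abstract, D_f(ν ‖ μ) = Σ_x μ(x) f(ν(x)/μ(x)), can be computed exactly for tiny discrete distributions. The Ising-model setting of the paper has exponentially many states, so exact summation is infeasible there; the sketch below only illustrates the definition, with hand-picked toy distributions.

```python
# Exact f-divergence between two small discrete distributions,
# D_f(nu || mu) = sum_x mu(x) * f(nu(x) / mu(x)).
import math

def f_divergence(nu, mu, f):
    return sum(m * f(n / m) for n, m in zip(nu, mu) if m > 0)

chi2 = lambda t: (t - 1) ** 2        # chi^2-divergence (the alpha = 2 case)
kl = lambda t: t * math.log(t)       # Kullback-Leibler divergence

nu = [0.5, 0.5]
mu = [0.25, 0.75]
print(f_divergence(nu, mu, chi2))    # (0.25)^2/0.25 + (0.25)^2/0.75 = 1/3
print(f_divergence(nu, mu, kl))
```

Swapping in a different convex f with f(1) = 0 yields the other divergences listed in the abstract, which is why a single algorithmic framework can cover them.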

Cite as

Weiming Feng and Yucheng Fu. On Approximating the f-Divergence Between Two Ising Models. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 59:1-59:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{feng_et_al:LIPIcs.ITCS.2026.59,
  author =	{Feng, Weiming and Fu, Yucheng},
  title =	{{On Approximating the f-Divergence Between Two Ising Models}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{59:1--59:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.59},
  URN =		{urn:nbn:de:0030-drops-253469},
  doi =		{10.4230/LIPIcs.ITCS.2026.59},
  annote =	{Keywords: Ising model, f-divergence, approximation algorithms, randomized algorithms}
}
Perfect Simulation of Las Vegas Algorithms via Local Computation

Authors: Xinyu Fu, Yonggang Jiang, and Yitong Yin

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
The notion of Las Vegas algorithms was introduced by Babai (1979) and can be defined in two ways:
- In Babai's original definition, a randomized algorithm is called Las Vegas if it has a finitely bounded running time and certifiable random failure.
- Another definition widely accepted today is that Las Vegas algorithms are zero-error randomized algorithms with random running times.
The equivalence between the two definitions is straightforward: for randomized algorithms with certifiable failures, repeatedly running the algorithm until no failure is encountered faithfully simulates the correct output of a successful execution. We show that a similar perfect simulation can also be achieved in distributed local computation. Specifically, in the LOCAL model, with a polylogarithmic overhead in time complexity, any Las Vegas algorithm with finitely bounded running time and locally certifiable failures can be converted into a zero-error Las Vegas algorithm. The transformed algorithm faithfully reproduces the correct output of the original algorithm in successful executions. This is achieved by a reduction to a distributed sampling problem under the Lovász Local Lemma (LLL), where the objective is to sample from the joint distribution of the random variables conditioned on avoiding all bad events. We then design the first efficient algorithm for this sampling problem in the LOCAL model.
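The repeat-until-success conversion that establishes the sequential equivalence can be sketched in a few lines; `attempt` below is a hypothetical stand-in for any bounded-time algorithm with certifiable failure. The distributed LOCAL-model version is the paper's contribution and is far subtler than this.

```python
# Sequential repeat-until-success: a bounded-time algorithm with
# certifiable failure becomes a zero-error algorithm whose running
# time is random.
import random

def attempt(rng):
    """Bounded running time; fails certifiably with probability 1/2."""
    if rng.random() < 0.5:
        return None        # certified failure
    return "correct output"

def las_vegas(rng):
    while True:            # zero-error, but the number of attempts is random
        out = attempt(rng)
        if out is not None:
            return out     # faithfully reproduces a successful execution

print(las_vegas(random.Random(0)))
```

With failure probability 1/2, the expected number of attempts is 2, so the expected running time stays within a constant factor of one attempt.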

Cite as

Xinyu Fu, Yonggang Jiang, and Yitong Yin. Perfect Simulation of Las Vegas Algorithms via Local Computation. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 63:1-63:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{fu_et_al:LIPIcs.ITCS.2026.63,
  author =	{Fu, Xinyu and Jiang, Yonggang and Yin, Yitong},
  title =	{{Perfect Simulation of Las Vegas Algorithms via Local Computation}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{63:1--63:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.63},
  URN =		{urn:nbn:de:0030-drops-253503},
  doi =		{10.4230/LIPIcs.ITCS.2026.63},
  annote =	{Keywords: Las Vegas algorithms, perfect simulation, Lov\'{a}sz Local Lemma, sampling}
}
Range Longest Increasing Subsequence and Its Relatives

Authors: Karthik C. S. and Saladi Rahul

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Longest increasing subsequence (LIS) is a classical textbook problem that is still actively studied in various computational models. In this work, we present several results for the range longest increasing subsequence problem (Range-LIS) and its variants. The input to Range-LIS is a sequence 𝒮 of n real numbers and a collection 𝒬 of m query ranges; for each query in 𝒬, the goal is to report the LIS of the sequence 𝒮 restricted to that query. Our two main results are for the following generalizations of the Range-LIS problem:
- 2D range queries: Each query is a pair of ranges, one of indices and one of values. We provide a randomized algorithm with running time Õ(mn^{1/2} + n^{3/2}) + O(k), where k is the cumulative length of the m output subsequences. This improves on the elementary Õ(mn)-time algorithm when m = Ω(√n). Previously, the only known result breaking the quadratic barrier was that of Tiskin [SODA'10], which could only handle 1D range queries (i.e., each query was a range of indices) and only output the length of the LIS (instead of reporting a subsequence achieving that length). Subsequent to our paper, Gawrychowski, Gorbachev, and Kociumaka, in a preprint, have extended Tiskin's approach to handle reporting for 1D range queries in O(n(log n)³ + m + k) time.
- Colored sequences: Each element in 𝒮 is colored, and for each query in 𝒬 the goal is to report a monochromatic LIS contained in the sequence 𝒮 restricted to that query. For 2D queries, we provide a randomized algorithm for this colored version with running time Õ(mn^{2/3} + n^{5/3}) + O(k). Moreover, for 1D queries, we provide an improved algorithm with running time Õ(mn^{1/2} + n^{3/2}) + O(k). Thus, we again improve on the elementary Õ(mn)-time algorithm.
Additionally, we prove that, assuming the well-known Combinatorial Boolean Matrix Multiplication Hypothesis, the runtime for 1D queries is essentially tight for combinatorial algorithms. Our algorithms combine several tools: dynamic programming (to precompute increasing subsequences with some desirable properties), geometric data structures (to efficiently compute the dynamic-programming entries), random sampling (to capture elements that are part of the LIS), classification of query ranges into large LIS and small LIS, and classification of colors into light and heavy. We believe that our techniques will be of interest for tackling other variants of the LIS problem and other range-searching problems.
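The elementary Õ(mn) baseline mentioned in the abstract can be made concrete: answer each 2D query (an index range and a value range) by running an O(n log n) patience-sorting LIS on the restricted subsequence. This sketch reports lengths only; the paper's algorithms also report the subsequences and improve on this running time.

```python
# Per-query LIS baseline for 2D range queries: O(n log n) per query.
import bisect

def lis_length(seq):
    tails = []                             # tails[i] = smallest tail of an
    for x in seq:                          # increasing subsequence of length i+1
        i = bisect.bisect_left(tails, x)   # strictly increasing variant
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def range_lis(S, queries):
    # queries: (lo, hi, vlo, vhi) with inclusive index and value ranges
    return [lis_length([x for x in S[lo:hi + 1] if vlo <= x <= vhi])
            for (lo, hi, vlo, vhi) in queries]

S = [3, 1, 4, 1, 5, 9, 2, 6]
print(range_lis(S, [(0, 7, 1, 9), (2, 6, 2, 6)]))   # -> [4, 2]
```

Each query is answered independently, which is exactly why the total cost grows as Õ(mn) and why amortizing work across queries, as the paper does, pays off once m is large.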

Cite as

Karthik C. S. and Saladi Rahul. Range Longest Increasing Subsequence and Its Relatives. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 87:1-87:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)



@InProceedings{karthikc.s._et_al:LIPIcs.ITCS.2026.87,
  author =	{Karthik C. S. and Rahul, Saladi},
  title =	{{Range Longest Increasing Subsequence and Its Relatives}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{87:1--87:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.87},
  URN =		{urn:nbn:de:0030-drops-253740},
  doi =		{10.4230/LIPIcs.ITCS.2026.87},
  annote =	{Keywords: Longest Increasing Subsequence, Range Query, Fine-Grained Complexity}
}
Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web

Authors: Florian Ruosch, Cristina Sarasua, and Abraham Bernstein

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3 (2025)


Abstract
In Argument Mining, predicting argumentative relations between texts (or spans) remains one of the most challenging aspects, even more so in the cross-document setting. This paper makes three key contributions to advance research in this domain. We first extend an existing dataset, the Sci-Arg corpus, by annotating it with explicit inter-document argumentative relations, thereby allowing arguments to be distributed over several documents forming an Argument Web; these new annotations are published using Semantic Web technologies (RDF, OWL). Second, we explore and evaluate three automated approaches for predicting these inter-document argumentative relations, establishing critical baselines on the new dataset. We find that a simple classifier based on discourse indicators with access to context outperforms neural methods. Third, we conduct a comparative analysis of these approaches for both intra- and inter-document settings, identifying statistically significant differences in results that indicate the necessity of distinguishing between these two scenarios. Our findings highlight significant challenges in this complex domain and open crucial avenues for future research on the Argument Web of Science, particularly for those interested in leveraging Semantic Web technologies and knowledge graphs to understand scholarly discourse. With this, we provide the first stepping stones in the form of a benchmark dataset, three baseline methods, and an initial analysis for a systematic exploration of this field relevant to the Web of Data and Science.

Cite as

Florian Ruosch, Cristina Sarasua, and Abraham Bernstein. Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 3, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{ruosch_et_al:TGDK.3.3.4,
  author =	{Ruosch, Florian and Sarasua, Cristina and Bernstein, Abraham},
  title =	{{Mining Inter-Document Argument Structures in Scientific Papers for an Argument Web}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{4:1--4:33},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{3},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.3.4},
  URN =		{urn:nbn:de:0030-drops-252159},
  doi =		{10.4230/TGDK.3.3.4},
  annote =	{Keywords: Argument Mining, Large Language Models, Knowledge Graphs, Link Prediction}
}
Rule-Based Knowledge Graph Completion (Invited Paper)

Authors: Patrick Betz, Christian Meilicke, and Heiner Stuckenschmidt

Published in: OASIcs, Volume 138, Joint Proceedings of the 20th and 21st Reasoning Web Summer Schools (RW 2024 & RW 2025)


Abstract
The field of knowledge graph completion is concerned with augmenting knowledge graphs with missing information. Symbolic rule-based approaches are not only efficient and interpretable but also competitive with embedding-based methods with regard to predictive quality. Rule-based knowledge graph completion can be separated into two stages, the learning stage and the application stage, which are both individually challenging. In the learning stage, Horn rules are mined from a given knowledge graph. Given the vast size of the space of all possible rules, the mining approach must select relevant rules effectively. In the application stage, the mined rules are used to make new predictions, which are assigned plausibility scores. These scores are set by aggregating the individual confidence values of rules that share the same consequence. This tutorial covers the fundamental aspects required to build a symbolic rule-based approach to knowledge graph completion. It discusses the different rule types, mining strategies, and how to effectively apply the rules in different scenarios. Finally, we discuss practical examples of rule application using the Python-based PyClause library.
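The application stage described above can be sketched on a toy scale: ground two-atom Horn rules body1(X,Z) ∧ body2(Z,Y) ⇒ head(X,Y) against a small knowledge graph and aggregate per-rule confidences for each candidate triple with max-aggregation. The graph, rules, and confidence values below are invented for the illustration; see the PyClause library for a real implementation.

```python
# Toy rule application with max-aggregation of rule confidences.
from collections import defaultdict

triples = {("anna", "livesIn", "rome"),
           ("anna", "bornIn", "rome"),
           ("rome", "cityOf", "italy")}

# (head relation, [body relation 1, body relation 2], confidence)
rules = [("citizenOf", ["livesIn", "cityOf"], 0.7),
         ("citizenOf", ["bornIn", "cityOf"], 0.9)]

def predict(triples, rules):
    index = defaultdict(set)
    for h, r, t in triples:
        index[(h, r)].add(t)
    scores = {}
    for head, (r1, r2), conf in rules:
        for (h, r), mids in index.items():
            if r != r1:
                continue
            for z in mids:                       # chain the two body atoms
                for t in index.get((z, r2), ()):
                    key = (h, head, t)
                    # max-aggregation over rules with the same consequence
                    scores[key] = max(scores.get(key, 0.0), conf)
    return scores

print(predict(triples, rules))   # {('anna', 'citizenOf', 'italy'): 0.9}
```

Both rules fire for the same candidate triple here, and max-aggregation keeps the higher confidence; other aggregation schemes (e.g., noisy-or) are among the design choices the tutorial covers.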

Cite as

Patrick Betz, Christian Meilicke, and Heiner Stuckenschmidt. Rule-Based Knowledge Graph Completion (Invited Paper). In Joint Proceedings of the 20th and 21st Reasoning Web Summer Schools (RW 2024 & RW 2025). Open Access Series in Informatics (OASIcs), Volume 138, pp. 1:1-1:45, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{betz_et_al:OASIcs.RW.2024/2025.1,
  author =	{Betz, Patrick and Meilicke, Christian and Stuckenschmidt, Heiner},
  title =	{{Rule-Based Knowledge Graph Completion}},
  booktitle =	{Joint Proceedings of the 20th and 21st Reasoning Web Summer Schools (RW 2024 \& RW 2025)},
  pages =	{1:1--1:45},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-405-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{138},
  editor =	{Artale, Alessandro and Bienvenu, Meghyn and Garc{\'\i}a, Yazm{\'\i}n Ib\'{a}\~{n}ez and Murlak, Filip},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.RW.2024/2025.1},
  URN =		{urn:nbn:de:0030-drops-250461},
  doi =		{10.4230/OASIcs.RW.2024/2025.1},
  annote =	{Keywords: Knowledge Graph Completion, Rule Learning, Symbolic AI}
}
Unsupervised Multimodal Learning for Fault Diagnosis and Prognosis - Application to Radiotherapy Systems (PhD Panel)

Authors: Kélian Poujade, Louise Travé-Massuyès, Jérémy Pirard, and Laure Vieillevigne

Published in: OASIcs, Volume 136, 36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025)


Abstract
Modern complex systems, such as radiotherapy machines, require robust strategies for fault detection, diagnosis, and prognosis to ensure operational continuity and patient safety. While data-driven methods have gained traction, few studies address diagnostic and prognostic tasks using multimodal operational data under unsupervised or semi-supervised learning settings. This gap is particularly critical given the scarcity of labeled failure data in real-world environments. This work aims to design a unified approach for fault detection, diagnosis, and prognosis using multimodal data in the absence of complete labeling. To this end, autoencoders (AEs) are employed due to their suitability for unsupervised and self-supervised learning, flexibility in handling heterogeneous data, and ability to construct latent representations optimized for various downstream tasks. A specific implementation based on a Long Short-Term Memory β-Variational Autoencoder (LSTM-β-VAE) was developed to detect anomalies in machine logs. This framework is applied to TomoTherapy® systems - a highly complex and under-explored use case within the radiotherapy domain. Initial results demonstrate strong anomaly detection performance on both a public benchmark dataset (HDFS) and a proprietary dataset derived from real-world TomoTherapy® machine faults. Beyond methodology, the paper includes a concise literature review of multimodal learning and data-driven diagnosis and prognosis with a focus on AEs. Based on this review, key research directions are identified for the continuation of the thesis, especially the integration of explainable AI as a means to enhance diagnosis capabilities in the absence of labeled faults.

Cite as

Kélian Poujade, Louise Travé-Massuyès, Jérémy Pirard, and Laure Vieillevigne. Unsupervised Multimodal Learning for Fault Diagnosis and Prognosis - Application to Radiotherapy Systems (PhD Panel). In 36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025). Open Access Series in Informatics (OASIcs), Volume 136, pp. 16:1-16:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{poujade_et_al:OASIcs.DX.2025.16,
  author =	{Poujade, K\'{e}lian and Trav\'{e}-Massuy\`{e}s, Louise and Pirard, J\'{e}r\'{e}my and Vieillevigne, Laure},
  title =	{{Unsupervised Multimodal Learning for Fault Diagnosis and Prognosis - Application to Radiotherapy Systems}},
  booktitle =	{36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025)},
  pages =	{16:1--16:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-394-2},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{136},
  editor =	{Quinones-Grueiro, Marcos and Biswas, Gautam and Pill, Ingo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2025.16},
  URN =		{urn:nbn:de:0030-drops-248058},
  doi =		{10.4230/OASIcs.DX.2025.16},
  annote =	{Keywords: Artificial Intelligence, Diagnosis, Prognosis, Radiotherapy machines}
}
Optimized Spectral Fault Receptive Fields for Diagnosis-Informed Prognosis

Authors: Stan Muñoz Gutiérrez and Franz Wotawa

Published in: OASIcs, Volume 136, 36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025)


Abstract
This paper introduces Spectral Fault Receptive Fields (SFRFs), a biologically inspired technique for degradation state assessment in bearing fault diagnosis and remaining useful life (RUL) estimation. Drawing on the center-surround organization of retinal ganglion cell receptive fields, we propose a frequency-domain feature extraction algorithm that enhances the detection of fault signatures in vibration signals. SFRFs are designed as antagonistic spectral filters centered on characteristic fault frequencies, with inhibitory surrounds that enable robust characterization of incipient faults under variable operating conditions. A multi-objective evolutionary optimization strategy based on the NSGA-II algorithm is employed to tune the receptive field parameters by simultaneously minimizing RUL prediction error, maximizing feature monotonicity, and promoting smooth degradation trajectories. The method is demonstrated on the XJTU-SY bearing run-to-failure dataset, confirming its suitability for constructing condition indicators in health monitoring applications. Key contributions include: (i) the introduction of SFRFs, inspired by the biology of vision in the primate retina; (ii) an evolutionary optimization framework guided by condition monitoring and prognosis criteria; and (iii) experimental evidence supporting the detection of early-stage faults and their precursors. Furthermore, we confirm that our diagnosis-informed spectral representation achieves accurate RUL prediction using a bagging regressor. The results highlight the interpretability and principled design of SFRFs, bridging signal processing, biological sensing principles, and data-driven prognostics in rotating machinery.
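A minimal center-surround spectral filter in the spirit of the receptive fields described above: an excitatory Gaussian centered on a characteristic fault frequency minus a broader inhibitory surround, i.e. a difference of Gaussians over the frequency axis, applied to a magnitude spectrum. All parameter values and the synthetic spectrum below are illustrative, not the evolutionarily optimized ones from the paper.

```python
# Difference-of-Gaussians "receptive field" response on a magnitude spectrum.
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def sfrf_response(freqs, spectrum, f_center, sigma_c=2.0, sigma_s=6.0, k=0.5):
    # antagonistic field: excitatory center minus inhibitory surround
    return sum((gauss(f, f_center, sigma_c) - k * gauss(f, f_center, sigma_s)) * s
               for f, s in zip(freqs, spectrum))

freqs = [0.5 * i for i in range(1001)]             # 0..500 Hz grid
spectrum = [gauss(f, 120.0, 1.0) for f in freqs]   # narrow fault peak at 120 Hz

on_fault = sfrf_response(freqs, spectrum, f_center=120.0)
off_fault = sfrf_response(freqs, spectrum, f_center=300.0)
print(on_fault > off_fault)
```

A field centered on the fault frequency responds strongly to the peak, while one centered elsewhere does not; the inhibitory surround suppresses broadband energy, which is what makes such filters robust to varying operating conditions.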

Cite as

Stan Muñoz Gutiérrez and Franz Wotawa. Optimized Spectral Fault Receptive Fields for Diagnosis-Informed Prognosis. In 36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025). Open Access Series in Informatics (OASIcs), Volume 136, pp. 9:1-9:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{munozgutierrez_et_al:OASIcs.DX.2025.9,
  author =	{Mu\~{n}oz Guti\'{e}rrez, Stan and Wotawa, Franz},
  title =	{{Optimized Spectral Fault Receptive Fields for Diagnosis-Informed Prognosis}},
  booktitle =	{36th International Conference on Principles of Diagnosis and Resilient Systems (DX 2025)},
  pages =	{9:1--9:20},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-394-2},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{136},
  editor =	{Quinones-Grueiro, Marcos and Biswas, Gautam and Pill, Ingo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.DX.2025.9},
  URN =		{urn:nbn:de:0030-drops-247986},
  doi =		{10.4230/OASIcs.DX.2025.9},
  annote =	{Keywords: Health Perception, Spectral Fault Receptive Fields, Remaining Useful Life, Incipient Fault Diagnosis, Prognostics and Health Management, Condition Monitoring, Evolutionary Multi-Objective Optimization, Bagged Regression Tree Ensemble, Bearing Fault Diagnosis}
}
Towards Constant Time Multi-Call Rumor Spreading on Small-Set Expanders

Authors: Emilio Cruciani, Sebastian Forster, and Tijn de Vos

Published in: LIPIcs, Volume 356, 39th International Symposium on Distributed Computing (DISC 2025)


Abstract
We study a multi-call variant of the classic PUSH&PULL rumor spreading process where nodes can contact k of their neighbors instead of a single one during both PUSH and PULL operations. We show that rumor spreading can be made faster at the cost of an increased amount of communication between the nodes. As a motivating example, consider the process on a complete graph of n nodes: while the standard PUSH&PULL protocol takes Θ(log n) rounds, we prove that our k-PUSH&PULL variant completes in Θ(log_{k} n) rounds, with high probability. We generalize this result in an expansion-sensitive way, as has been done for the classic PUSH&PULL protocol for different notions of expansion, e.g., conductance and vertex expansion. We consider small-set vertex expanders, graphs in which every sufficiently small subset of nodes has a large neighborhood, ensuring strong local connectivity. In particular, when the expansion parameter satisfies ϕ > 1, these graphs have a diameter of o(log n), as opposed to other standard notions of expansion. Since the graph’s diameter is a lower bound on the number of rounds required for rumor spreading, this makes small-set expanders particularly well-suited for fast information dissemination. We prove that k-PUSH&PULL takes O(log_{ϕ} n ⋅ log_{k} n) rounds in these expanders, with high probability. We complement this with a simple lower bound of Ω(log_{ϕ} n+ log_{k} n) rounds.
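The motivating example above is easy to simulate: on the complete graph K_n, in each round every node contacts k uniformly random other nodes, pushing the rumor if it knows it and pulling it if the contact does. The toy run below is only a sanity check of the log_k n behavior, not a proof, and the parameters are arbitrary.

```python
# Simulation of k-PUSH&PULL rumor spreading on the complete graph K_n.
import random

def k_push_pull_rounds(n, k, rng):
    informed = {0}                       # node 0 starts with the rumor
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for u in range(n):
            others = [v for v in range(n) if v != u]
            for v in rng.sample(others, k):
                if u in informed:
                    newly.add(v)         # PUSH: u tells v
                if v in informed:
                    newly.add(u)         # PULL: u learns from v
        informed |= newly
    return rounds

print(k_push_pull_rounds(200, 4, random.Random(1)))
```

Each round, every informed node reaches about k nodes by push and is found by about k contacts per uninformed node by pull, so the informed set grows by roughly a factor of k per round until saturation, matching the Θ(log_k n) bound on K_n.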

Cite as

Emilio Cruciani, Sebastian Forster, and Tijn de Vos. Towards Constant Time Multi-Call Rumor Spreading on Small-Set Expanders. In 39th International Symposium on Distributed Computing (DISC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 356, pp. 26:1-26:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{cruciani_et_al:LIPIcs.DISC.2025.26,
  author =	{Cruciani, Emilio and Forster, Sebastian and de Vos, Tijn},
  title =	{{Towards Constant Time Multi-Call Rumor Spreading on Small-Set Expanders}},
  booktitle =	{39th International Symposium on Distributed Computing (DISC 2025)},
  pages =	{26:1--26:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-402-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{356},
  editor =	{Kowalski, Dariusz R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2025.26},
  URN =		{urn:nbn:de:0030-drops-248434},
  doi =		{10.4230/LIPIcs.DISC.2025.26},
  annote =	{Keywords: small set expansion, vertex expansion, rumor spreading, multi-call rumor spreading, push\&pull protocol}
}
PIPQ: Strict Insert-Optimized Concurrent Priority Queue

Authors: Olivia Grimes, Ahmed Hassan, Panagiota Fatourou, and Roberto Palmieri

Published in: LIPIcs, Volume 356, 39th International Symposium on Distributed Computing (DISC 2025)


Abstract
This paper presents PIPQ, a strict and linearizable concurrent priority queue whose design differs from existing solutions in the literature because it focuses on enabling parallelism of insert operations, as opposed to accelerating delete-min operations, as traditionally done. In a nutshell, PIPQ’s structure includes two levels: the worker level and the leader level. The worker level provides per-thread data structures enabling fast and parallel insertions. The leader level contains the highest-priority elements in the priority queue and can thus serve delete-min operations. Our evaluation, which includes an exploration of different data access patterns, operation mixes, runtime settings, and an integration into a graph-based application, shows that PIPQ outperforms competitors in a variety of cases, especially with insert-dominant workloads.
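The worker/leader split can be sketched in a few lines. The toy below is single-threaded and is NOT the actual PIPQ algorithm (which is concurrent and linearizable); it only illustrates the structural idea of cheap per-worker insert buffers feeding a shared leader heap.

```python
import heapq

class TwoLevelPQ:
    """Toy, single-threaded sketch of a two-level priority queue in the
    spirit of PIPQ's worker/leader split (not the actual PIPQ design).

    Inserts append to a per-worker buffer without touching the shared
    leader heap; delete_min is served from the leader heap.
    """

    def __init__(self, num_workers=4):
        self.workers = [[] for _ in range(num_workers)]  # unsorted buffers
        self.leader = []  # min-heap

    def insert(self, worker_id, key):
        # Fast path: append to a private buffer, no shared-heap contention.
        self.workers[worker_id].append(key)

    def delete_min(self):
        # To keep this sketch strict (always return the global minimum),
        # we drain every worker buffer before popping; the real PIPQ
        # instead keeps the leader level populated with the
        # highest-priority elements so delete-min rarely touches workers.
        for buf in self.workers:
            for key in buf:
                heapq.heappush(self.leader, key)
            buf.clear()
        if not self.leader:
            raise IndexError("delete_min from empty queue")
        return heapq.heappop(self.leader)
```

The design point the sketch makes: strictness forces delete-min to see every pending insert, which is why keeping the leader level stocked with the smallest keys (rather than draining on demand) is the interesting part of the real data structure.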

Cite as

Olivia Grimes, Ahmed Hassan, Panagiota Fatourou, and Roberto Palmieri. PIPQ: Strict Insert-Optimized Concurrent Priority Queue. In 39th International Symposium on Distributed Computing (DISC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 356, pp. 35:1-35:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{grimes_et_al:LIPIcs.DISC.2025.35,
  author =	{Grimes, Olivia and Hassan, Ahmed and Fatourou, Panagiota and Palmieri, Roberto},
  title =	{{PIPQ: Strict Insert-Optimized Concurrent Priority Queue}},
  booktitle =	{39th International Symposium on Distributed Computing (DISC 2025)},
  pages =	{35:1--35:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-402-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{356},
  editor =	{Kowalski, Dariusz R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2025.35},
  URN =		{urn:nbn:de:0030-drops-248525},
  doi =		{10.4230/LIPIcs.DISC.2025.35},
  annote =	{Keywords: Priority Queue, Concurrent Data Structures, Synchronization}
}
Document
Survey
Resilience in Knowledge Graph Embeddings

Authors: Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo

Published in: TGDK, Volume 3, Issue 2 (2025). Transactions on Graph Data and Knowledge, Volume 3, Issue 2


Abstract
In recent years, knowledge graphs have gained interest and witnessed widespread applications in various domains, such as information retrieval, question answering, and recommendation systems. Large-scale knowledge graphs have demonstrated their utility in effectively representing structured knowledge. To further facilitate the application of machine learning techniques, knowledge graph embedding (KGE) models have been developed. Such models can transform entities and relationships within knowledge graphs into vectors. However, these embedding models often face challenges related to noise, missing information, distribution shift, adversarial attacks, etc. This can lead to sub-optimal embeddings and incorrect inferences, thereby negatively impacting downstream applications. While the existing literature has so far focused on adversarial attacks on KGE models, the challenges related to the other critical aspects remain unexplored. In this paper, we first give a unified definition of resilience, encompassing several factors such as generalization, in-distribution generalization, distribution adaptation, and robustness. After formalizing these concepts for machine learning in general, we define them in the context of knowledge graphs. To identify the gaps in existing work on resilience in the context of knowledge graphs, we perform a systematic survey, taking into account all the aspects mentioned previously. Our survey results show that most existing works focus on a specific aspect of resilience, namely robustness. After categorizing such works based on their respective aspects of resilience, we discuss the challenges and future research directions.

Cite as

Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo. Resilience in Knowledge Graph Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 1:1-1:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{sharma_et_al:TGDK.3.2.1,
  author =	{Sharma, Arnab and Kouagou, N'Dah Jean and Ngomo, Axel-Cyrille Ngonga},
  title =	{{Resilience in Knowledge Graph Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:38},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.1},
  URN =		{urn:nbn:de:0030-drops-248117},
  doi =		{10.4230/TGDK.3.2.1},
  annote =	{Keywords: Knowledge graphs, Resilience, Robustness}
}
Document
Research
GraphRAG on Technical Documents - Impact of Knowledge Graph Schema

Authors: Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke

Published in: TGDK, Volume 3, Issue 2 (2025). Transactions on Graph Data and Knowledge, Volume 3, Issue 2


Abstract
Retrieval Augmented Generation (RAG) is seeing rapid adoption in industry to enable employees to query information captured in proprietary data for their organisation. In this work, we test the impact of domain-relevant knowledge graph schemas on the results of Microsoft’s GraphRAG pipeline. Our approach aims to address the poor quality of GraphRAG responses on technical reports rich in domain-specific terms. The use case involves technical reports about geology, chemistry and mineral processing published by the Minerals Research Institute of Western Australia (MRIWA). Four schemas are considered: a simple five-class minerals domain expert-developed schema, an expanded minerals domain schema, the Microsoft GraphRAG auto-generated schema, and a schema-less GraphRAG. These are compared to a conventional baseline RAG. Performance is evaluated using a scoring approach that accounts for the mix of correct, incorrect, additional, and missing content in RAG responses. The results show that the simple five-class minerals domain schema extracts approximately 10% more entities from the MRIWA reports than the other schema options. Additionally, both the five-class and the expanded eight-class minerals domain schemas produce the most factually correct answers and the fewest hallucinations. We attribute this to the minerals-specific schemas extracting more relevant, domain-specific information during the Indexing stage. As a result, the Query stage’s context window includes more high-value content. This contributes to the observed improvement in answer quality compared to the other pipelines. In contrast, pipelines with fewer domain-related entities in the KG retrieve less valuable information, leaving more room for irrelevant content in the context window. Baseline RAG responses were typically shorter, less complete, and contained more hallucinations compared to our GraphRAG pipelines. 
We provide a complete set of resources at https://github.com/nlp-tlp/GraphRAG-on-Minerals-Domain/tree/main. These resources include links to the MRIWA reports, a set of questions (from simple to challenging) along with domain-expert curated answers, schemas, and evaluations of the pipelines.
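The abstract evaluates answers by their mix of correct, incorrect, additional, and missing content. The paper's exact rubric is not reproduced here; the snippet below is a purely hypothetical illustration of one way such a mix could be counted and aggregated, given sets of facts.

```python
def score_response(gold_facts, response_facts, relevant_facts=None):
    """Hypothetical scoring sketch (the paper's actual rubric may differ).

    gold_facts: facts the curated answer should contain.
    response_facts: facts asserted by the generated answer.
    relevant_facts: facts that are true but not required; response facts
    that are neither gold nor relevant count as incorrect (hallucinated).
    """
    relevant = set(relevant_facts or ())
    gold, resp = set(gold_facts), set(response_facts)
    correct = resp & gold
    additional = (resp - gold) & relevant   # true but unrequested
    incorrect = resp - gold - relevant      # hallucinated content
    missing = gold - resp
    denom = len(gold) + len(incorrect)
    return {
        "correct": len(correct),
        "additional": len(additional),
        "incorrect": len(incorrect),
        "missing": len(missing),
        # simple aggregate: reward correct facts, penalize hallucinations
        "score": len(correct) / denom if denom else 1.0,
    }
```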

Cite as

Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke. GraphRAG on Technical Documents - Impact of Knowledge Graph Schema. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@Article{scaffidi_et_al:TGDK.3.2.3,
  author =	{Scaffidi, Henri and Hodkiewicz, Melinda and Woods, Caitlin and Roocke, Nicole},
  title =	{{GraphRAG on Technical Documents - Impact of Knowledge Graph Schema}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.3},
  URN =		{urn:nbn:de:0030-drops-248131},
  doi =		{10.4230/TGDK.3.2.3},
  annote =	{Keywords: RAG, minerals, local search, global search, entity extraction, competency questions}
}
Document
Temporal Ensemble Logic for Integrative Representation of the Entirety of Clinical Trials

Authors: Xiaojin Li, Yan Huang, Rashmie Abeysinghe, Zenan Sun, Hongyu Chen, Pengze Li, Xing He, Shiqiang Tao, Cui Tao, Jiang Bian, Licong Cui, and Guo-Qiang Zhang

Published in: LIPIcs, Volume 355, 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)


Abstract
Clinical trials are typically specified with protocols that define eligibility criteria, treatment regimens, follow-up schedules, and outcome assessments. Temporality is a hallmark of all clinical trials, reflected within and across trial components, with complex dependencies unfolding across multiple time points. Despite their importance, clinical trial protocols are described in free-text format, limiting their semantic precision and the ability to support automated reasoning, leverage data across studies and sites, or simulate trial execution under varying assumptions using Real-World Data. This paper introduces a formalized representation of clinical trials using Temporal Ensemble Logic (TEL). TEL incorporates metricized modal operators, such as "always until t" (□_t) and "possibly until t" (◇_t), where t is a time-length parameter, to offer a logical framework for capturing phenotypes in biomedicine. TEL is more expressive in syntax than classical linear temporal logic (LTL) while maintaining the simplicity of semantic structures. The attributes of TEL are exploited in this paper to formally represent not only individual clinical trial components, but also the timing and sequential dependencies of these components as a whole. Modeling strategies and demonstration case studies are provided to show that TEL can represent the entirety of clinical trials, thereby providing a formal logical framework that can be used to represent the intricate temporal dependencies in trial structure specification. Since clinical trials are a cornerstone of evidence-based medicine, serving as the scientific basis for evaluating the safety, efficacy, and comparative effectiveness of therapeutic interventions, results reported here can serve as a stepping stone that leads to scalable, consistent, and reproducible representation and simulation of clinical trials across all disease domains.
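The metricized operators can be read operationally. The sketch below is an illustrative, non-authoritative evaluator of □_t and ◇_t over a finite boolean trace, under the assumed discrete-time reading "□_t φ holds at position i iff φ holds at every point in [i, i + t]" (TEL's formal semantics may differ in detail).

```python
def always_until(trace, i, t):
    """Box_t at position i: the proposition (given as a list of booleans,
    one per time point) holds at EVERY point in the window [i, i + t].
    Requires the whole window to lie within the trace."""
    window = trace[i:i + t + 1]
    return len(window) == t + 1 and all(window)

def possibly_until(trace, i, t):
    """Diamond_t at position i: the proposition holds at SOME point in
    the window [i, i + t]."""
    return any(trace[i:i + t + 1])
```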

Cite as

Xiaojin Li, Yan Huang, Rashmie Abeysinghe, Zenan Sun, Hongyu Chen, Pengze Li, Xing He, Shiqiang Tao, Cui Tao, Jiang Bian, Licong Cui, and Guo-Qiang Zhang. Temporal Ensemble Logic for Integrative Representation of the Entirety of Clinical Trials. In 32nd International Symposium on Temporal Representation and Reasoning (TIME 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 355, pp. 13:1-13:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)



@InProceedings{li_et_al:LIPIcs.TIME.2025.13,
  author =	{Li, Xiaojin and Huang, Yan and Abeysinghe, Rashmie and Sun, Zenan and Chen, Hongyu and Li, Pengze and He, Xing and Tao, Shiqiang and Tao, Cui and Bian, Jiang and Cui, Licong and Zhang, Guo-Qiang},
  title =	{{Temporal Ensemble Logic for Integrative Representation of the Entirety of Clinical Trials}},
  booktitle =	{32nd International Symposium on Temporal Representation and Reasoning (TIME 2025)},
  pages =	{13:1--13:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-401-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{355},
  editor =	{Vidal, Thierry and Wa{\l}\k{e}ga, Przemys{\l}aw Andrzej},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TIME.2025.13},
  URN =		{urn:nbn:de:0030-drops-244595},
  doi =		{10.4230/LIPIcs.TIME.2025.13},
  annote =	{Keywords: Temporal ensemble logic, Clinical trials, Logic-based modeling}
}
