23 Search Results for "Yan, Ying"


Survey
Resilience in Knowledge Graph Embeddings

Authors: Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2 (2025)


Abstract
In recent years, knowledge graphs have gained interest and witnessed widespread application in various domains, such as information retrieval, question answering, and recommendation systems. Large-scale knowledge graphs have demonstrated their utility in effectively representing structured knowledge. To further facilitate the application of machine learning techniques, knowledge graph embedding (KGE) models have been developed; such models transform the entities and relationships of a knowledge graph into vectors. However, these embedding models often face challenges related to noise, missing information, distribution shift, and adversarial attacks, which can lead to sub-optimal embeddings and incorrect inferences, negatively impacting downstream applications. While the existing literature has so far focused on adversarial attacks on KGE models, the challenges related to the other critical aspects remain largely unexplored. In this paper, we first give a unified definition of resilience, encompassing several factors such as generalization, in-distribution generalization, distribution adaptation, and robustness. After formalizing these concepts for machine learning in general, we define them in the context of knowledge graphs. To identify the gaps in existing work on resilience in the context of knowledge graphs, we perform a systematic survey covering all of the aspects mentioned above. Our survey results show that most existing works focus on a single aspect of resilience, namely robustness. After categorizing these works by the aspect of resilience they address, we discuss challenges and future research directions.
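The abstract describes KGE models that map entities and relations to vectors. As a minimal, hypothetical illustration (hand-picked toy 3-dimensional vectors, not trained embeddings), a translational model in the style of TransE scores a triple (h, r, t) by how close h + r lands to t:

```python
# Minimal sketch of how a translational KGE model (in the style of TransE)
# scores triples. The vectors are hypothetical toy embeddings, not trained values.

def transe_score(h, r, t):
    """Negative L2 distance ||h + r - t||: higher means more plausible."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 3-dimensional embeddings for entities and a relation.
emb = {
    "Berlin":     [0.9, 0.1, 0.0],
    "Germany":    [1.0, 0.0, 0.5],
    "Paris":      [0.0, 0.8, 0.1],
    "capital_of": [0.1, -0.1, 0.5],
}

good = transe_score(emb["Berlin"], emb["capital_of"], emb["Germany"])
bad = transe_score(emb["Paris"], emb["capital_of"], emb["Germany"])
print(good > bad)  # a plausible triple should outscore an implausible one
```

In a resilience study one would perturb such embeddings (noise, missing triples, adversarial edits) and check whether plausible triples still outscore implausible ones.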

Cite as

Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo. Resilience in Knowledge Graph Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 1:1-1:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{sharma_et_al:TGDK.3.2.1,
  author =	{Sharma, Arnab and Kouagou, N'Dah Jean and Ngomo, Axel-Cyrille Ngonga},
  title =	{{Resilience in Knowledge Graph Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:38},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.1},
  URN =		{urn:nbn:de:0030-drops-248117},
  doi =		{10.4230/TGDK.3.2.1},
  annote =	{Keywords: Knowledge graphs, Resilience, Robustness}
}
Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing

Authors: Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Millions of biological sample records collected over the last few centuries and archived in natural history collections are un-georeferenced. Georeferencing the complex locality descriptions associated with these collection samples is a highly labour-intensive task that collection agencies struggle with. None of the existing automated methods exploits maps, which are an essential tool for georeferencing complex relations. We present preliminary experiments and results of a novel method that exploits the multi-modal capabilities of recent Large Multi-Modal Models (LMMs). This method enables the model to visually contextualize the spatial relations it reads in the locality description. We use a grid-based approach to adapt these auto-regressive models for this task in a zero-shot setting. Our experiments, conducted on a small manually annotated dataset, show impressive results for our approach (∼1 km average distance error) compared to uni-modal georeferencing with Large Language Models and existing georeferencing tools. The paper also discusses the findings of the experiments in light of an LMM’s ability to comprehend fine-grained maps. Motivated by these results, a practical framework is proposed to integrate this method into a georeferencing workflow.
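The ∼1 km figure in the abstract is an average distance error. A common way to compute such a metric (sketched here with made-up coordinates, not the paper's dataset) is the great-circle (haversine) distance between predicted and annotated points:

```python
import math

# Sketch of a georeferencing evaluation metric: great-circle (haversine)
# distance between predicted and ground-truth coordinates, averaged over a
# dataset. The coordinates below are invented examples.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

predictions = [(-41.29, 174.78), (-36.85, 174.76)]   # hypothetical model output
ground_truth = [(-41.30, 174.78), (-36.84, 174.77)]  # annotated localities

errors = [haversine_km(*p, *g) for p, g in zip(predictions, ground_truth)]
print(round(sum(errors) / len(errors), 2))  # average distance error in km
```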

Cite as

Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones. Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{wijegunarathna_et_al:LIPIcs.GIScience.2025.12,
  author =	{Wijegunarathna, Kalana and Stock, Kristin and Jones, Christopher B.},
  title =	{{Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.12},
  URN =		{urn:nbn:de:0030-drops-238412},
  doi =		{10.4230/LIPIcs.GIScience.2025.12},
  annote =	{Keywords: Large Multi-Modal Models, Large Language Models, LLM, Georeferencing, Natural History collections}
}
Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs

Authors: Katrina Schweikert, David K. Kedrowski, Shirly Stephen, and Torsten Hahmann

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Geospatial Knowledge Graphs (GeoKGs) represent a significant advancement in the integration of AI-driven geographic information, facilitating interoperable and semantically rich geospatial analytics across various domains. This paper explores the use of topologically enriched GeoKGs, built on an explicit representation of S2 Geometry alongside precomputed topological relations, for constructing efficient geospatial analysis workflows within and across knowledge graphs (KGs). Using the SAWGraph knowledge graph as a case study focused on environmental contamination by PFAS, we demonstrate how this framework supports fundamental GIS operations - such as spatial filtering, proximity analysis, overlay operations, and network analysis - in a GeoKG setting while allowing for the easy linking of these operations with one another and with semantic filters. This enables the efficient execution of complex geospatial analyses as semantically explicit queries and enhances the usability of geospatial data across graphs. Additionally, the framework eliminates the need for explicit support for GeoSPARQL’s topological operations in the utilized graph databases and better integrates spatial knowledge into the overall semantic inference process supported by RDFS and OWL ontologies.
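As an illustration of the precompute-then-query idea (the grid cells and feature names below are invented, and real S2 cell ids and GeoSPARQL relations are far richer than this), topological relations can be materialised once from discrete-cell coverings so that later spatial filters reduce to simple lookups:

```python
# Sketch: features are mapped to sets of discrete grid cells (standing in for
# S2 cells), topological relations are materialised once as triples, and
# "spatial joins" at query time become set-membership lookups rather than
# geometry computations. All data here is illustrative.

features = {
    "well_1":   {(3, 4)},                  # a point feature covered by one cell
    "parcel_A": {(3, 4), (3, 5), (4, 4)},  # a polygon covered by three cells
    "river_X":  {(4, 4), (4, 5)},
}

# Precompute pairwise relations from cell overlap (a coarse approximation).
triples = set()
for a, ca in features.items():
    for b, cb in features.items():
        if a != b and ca & cb:
            triples.add((a, "intersects", b))
        if a != b and ca <= cb:
            triples.add((a, "within", b))

# Query time: no geometry is touched, only the materialised triples.
print(("well_1", "within", "parcel_A") in triples)      # True
print(("well_1", "intersects", "river_X") in triples)   # False: no shared cell
```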

Cite as

Katrina Schweikert, David K. Kedrowski, Shirly Stephen, and Torsten Hahmann. Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 4:1-4:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{schweikert_et_al:LIPIcs.GIScience.2025.4,
  author =	{Schweikert, Katrina and Kedrowski, David K. and Stephen, Shirly and Hahmann, Torsten},
  title =	{{Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{4:1--4:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.4},
  URN =		{urn:nbn:de:0030-drops-238332},
  doi =		{10.4230/LIPIcs.GIScience.2025.4},
  annote =	{Keywords: knowledge graph, GeoKG, spatial analysis, ontology, SPARQL, GeoSPARQL, discrete global grid system, S2 geometry, GeoAI, PFAS}
}
A Chatbot to Help Promoting Financial Literacy

Authors: Davi Silva Eleuterio, Pedro Filipe Oliveira, Paulo Matos, and Jorge Manuel Afonso Alves

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
Currently, governments and many other institutions have been making significant efforts to promote financial literacy. However, a considerable portion of the population still lacks basic financial knowledge, highlighting the need for updated strategies to enhance financial education. In today’s digital world - where people often search for quick and convenient solutions - the development of a reliable and intelligent chatbot to answer questions related to financial concepts and decision-making can be very beneficial. This paper proposes the implementation of an automated web scraper to extract content from a trustworthy financial education website that offers a wealth of financial concepts, using the collected data to develop a chatbot that provides accurate and helpful responses to users. The solution is built using technologies such as Streamlit, Langchain, and OpenAI.
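The paper's actual stack (Streamlit, Langchain, OpenAI) is not reproduced here; the following library-free sketch, with invented example text, only illustrates the retrieval step of such a retrieval-augmented pipeline: scraped text is chunked, and the chunk sharing the most words with the user question is selected as context for the language model:

```python
# Library-free sketch of the retrieval step in a RAG pipeline over scraped
# financial-education text. Chunking and scoring here are deliberately naive;
# the example text is invented.

def chunk(text, size=8):
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    """Return the chunk with the largest word overlap with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

scraped = ("Compound interest is interest earned on interest. "
           "A budget tracks income and expenses over time. "
           "An emergency fund covers unexpected expenses.")

chunks = chunk(scraped)
print(retrieve("what is compound interest", chunks))
```

A production system would replace word overlap with embedding similarity and pass the retrieved chunk to the model as grounding context.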

Cite as

Davi Silva Eleuterio, Pedro Filipe Oliveira, Paulo Matos, and Jorge Manuel Afonso Alves. A Chatbot to Help Promoting Financial Literacy. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 7:1-7:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{eleuterio_et_al:OASIcs.SLATE.2025.7,
  author =	{Eleuterio, Davi Silva and Oliveira, Pedro Filipe and Matos, Paulo and Alves, Jorge Manuel Afonso},
  title =	{{A Chatbot to Help Promoting Financial Literacy}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{7:1--7:9},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.7},
  URN =		{urn:nbn:de:0030-drops-236870},
  doi =		{10.4230/OASIcs.SLATE.2025.7},
  annote =	{Keywords: chatbot, financial literacy, web scraper, LLM, RAG}
}
SimdMinimizers: Computing Random Minimizers, fast

Authors: Ragnar Groot Koerkamp and Igor Martayan

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Motivation. Because of the rapidly-growing amount of sequencing data, computing sketches of large textual datasets has become an essential preprocessing task. These sketches are typically much smaller than the input sequences, but preserve sufficient information for downstream analysis. Minimizers are an especially popular sketching technique and used in a wide variety of applications. They sample at least one out of every w consecutive k-mers. As DNA sequencers are getting more accurate, some applications can afford to use a larger w and hence sparser and smaller sketches. And as sketches get smaller, their analysis becomes faster, so the time spent sketching the full-sized input becomes more of a bottleneck. Methods. Our library simd-minimizers implements a random minimizer algorithm using SIMD instructions. It supports both AVX2 and NEON architectures. Its main novelty is two-fold. First, it splits the input into 8 chunks that are streamed over in parallel through all steps of the algorithm. Second, this streaming is enabled by the completely deterministic two-stacks sliding window minimum algorithm, which seems not to have been used before for finding minimizers. Results. Our library is up to 6.8× faster than a scalar implementation of the rescan method when w = 5 is small, and 3.4× faster for larger w = 19. Computing canonical minimizers is less than 50% slower than computing forward minimizers, and over 15× faster than the existing implementation in the minimizer-iter crate. Our library finds all (canonical) minimizers of a 3.2 Gbp human genome in 5.2 (resp. 6.7) seconds.
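The SIMD and two-stacks machinery is the paper's contribution and is not shown here; as a plain scalar reference for what is being computed, a random minimizer scheme keeps, for each window of w consecutive k-mers, the position of the k-mer with the smallest hash:

```python
import hashlib

# Scalar reference for random minimizers: for every window of w consecutive
# k-mers, keep the position of the k-mer with the smallest hash value.
# blake2b is a deterministic stand-in for the scheme's random hash ordering.

def kmer_hash(kmer):
    return hashlib.blake2b(kmer.encode(), digest_size=8).digest()

def minimizer_positions(seq, k, w):
    n = len(seq) - k + 1  # number of k-mers in seq
    positions = set()
    for i in range(n - w + 1):  # each window of w consecutive k-mers
        window = range(i, i + w)
        positions.add(min(window, key=lambda j: kmer_hash(seq[j:j + k])))
    return sorted(positions)

pos = minimizer_positions("ACGTACGTTGCA", k=4, w=3)
print(pos)
# Sampling guarantee: at least one k-mer out of every w consecutive ones,
# so consecutive sampled positions are at most w apart.
print(all(b - a <= 3 for a, b in zip(pos, pos[1:])))
```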

Cite as

Ragnar Groot Koerkamp and Igor Martayan. SimdMinimizers: Computing Random Minimizers, fast. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 20:1-20:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{grootkoerkamp_et_al:LIPIcs.SEA.2025.20,
  author =	{Groot Koerkamp, Ragnar and Martayan, Igor},
  title =	{{SimdMinimizers: Computing Random Minimizers, fast}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{20:1--20:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.20},
  URN =		{urn:nbn:de:0030-drops-232581},
  doi =		{10.4230/LIPIcs.SEA.2025.20},
  annote =	{Keywords: Minimizers, Randomized algorithms, Sketching, Hashing}
}
LoRaHART: Hardware-Aware Real-Time Scheduling for LoRa

Authors: Soumya Ranjan Sahoo, Amalinda Gamage, Niraj Kumar, and Arvind Easwaran

Published in: LIPIcs, Volume 335, 37th Euromicro Conference on Real-Time Systems (ECRTS 2025)


Abstract
Time-sensitive data acquisition is critical for many Low-Power Wide-Area Network (LPWAN) applications, such as healthcare monitoring and the industrial Internet of Things. Among the available LPWAN technologies, LoRa (Long Range) has emerged as a leading choice, offering kilometer-scale communication with minimal power consumption and enabling high-density deployments across large areas. However, the conventional ALOHA-based Medium Access Control (MAC) in LoRa is not designed to support real-time communication over large-scale networks. This paper introduces LoRaHART, a novel approach that overcomes two critical, under-explored limitations in Commercial Off The Shelf (COTS) LoRa gateways that impact real-time performance: LoRa gateways have limited capacity for demodulating parallel transmissions, and their antennas can either transmit or receive at any given instant, but not both. LoRaHART incorporates a hardware-aware super-frame structure, comprising both Time Division Multiple Access (TDMA) slots and opportunistic retransmissions using Carrier Sense Multiple Access (CSMA), designed to mitigate the above constraints. We use a partial packing and makespan minimization algorithm to schedule periodic real-time transmissions efficiently within the TDMA slots, and also develop a probabilistic node contention model for CSMA retransmissions, providing analytical guarantees for deadline satisfaction under ideal channel conditions. Our evaluation of LoRaHART on a 40-node LoRa testbed demonstrates significant improvements over existing solutions in practice, achieving an average Packet Reception Ratio of 98% and a 45% higher airtime utilization than the best-performing baseline.
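LoRaHART's exact packing algorithm is not reproduced here; as a hedged stand-in for the makespan-minimization step, the classic greedy longest-processing-time heuristic assigns each transmission to whichever parallel demodulation path currently finishes earliest (the airtimes below are invented):

```python
import heapq

# Greedy LPT (longest processing time first) heuristic for makespan
# minimization: sort jobs by decreasing airtime and always assign the next
# one to the least-loaded "channel" (demodulation path). This is a classic
# stand-in, not the paper's algorithm; airtimes are illustrative.

def lpt_schedule(airtimes_ms, num_channels):
    """Return per-channel loads after greedy assignment; makespan = max load."""
    heap = [(0.0, c) for c in range(num_channels)]  # (current load, channel)
    heapq.heapify(heap)
    loads = [0.0] * num_channels
    for t in sorted(airtimes_ms, reverse=True):
        load, c = heapq.heappop(heap)   # channel that finishes earliest
        loads[c] = load + t
        heapq.heappush(heap, (loads[c], c))
    return loads

loads = lpt_schedule([50, 40, 30, 30, 20, 10], num_channels=2)
print(max(loads))  # makespan → 90.0 (a perfect split of 180 ms total airtime)
```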

Cite as

Soumya Ranjan Sahoo, Amalinda Gamage, Niraj Kumar, and Arvind Easwaran. LoRaHART: Hardware-Aware Real-Time Scheduling for LoRa. In 37th Euromicro Conference on Real-Time Systems (ECRTS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 335, pp. 10:1-10:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{sahoo_et_al:LIPIcs.ECRTS.2025.10,
  author =	{Sahoo, Soumya Ranjan and Gamage, Amalinda and Kumar, Niraj and Easwaran, Arvind},
  title =	{{LoRaHART: Hardware-Aware Real-Time Scheduling for LoRa}},
  booktitle =	{37th Euromicro Conference on Real-Time Systems (ECRTS 2025)},
  pages =	{10:1--10:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-377-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{335},
  editor =	{Mancuso, Renato},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2025.10},
  URN =		{urn:nbn:de:0030-drops-235880},
  doi =		{10.4230/LIPIcs.ECRTS.2025.10},
  annote =	{Keywords: LoRa, LPWAN, Real-time Scheduling, Hardware Constraints}
}
Track A: Algorithms, Complexity and Games
Even Faster Algorithm for the Chamfer Distance

Authors: Ying Feng and Piotr Indyk

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
For two d-dimensional point sets A, B of size up to n, the Chamfer distance from A to B is defined as CH(A,B) = ∑_{a ∈ A} min_{b ∈ B} ‖a-b‖. The Chamfer distance is a widely used measure for quantifying dissimilarity between sets of points, with many machine learning and computer vision applications. A recent work of Bakshi et al., NeurIPS'23, gave the first near-linear-time (1+ε)-approximate algorithm, with a running time of 𝒪(nd log(n)/ε²). In this paper we improve the running time further, to 𝒪(nd(log log n + log(1/ε))/ε²). When ε is a constant, this significantly reduces the gap between the upper bound and the trivial Ω(dn) lower bound, from 𝒪(log n) to 𝒪(log log n).
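The paper's (1+ε)-approximation is not reproduced here; the brute-force computation below, which runs in O(|A|·|B|·d) time, is the exact quantity being approximated and serves as a correctness reference (toy points):

```python
import math

# Exact Chamfer distance CH(A, B) = sum over a in A of the distance from a
# to its nearest neighbour in B. Note the measure is asymmetric in general:
# CH(A, B) need not equal CH(B, A). Points are toy data.

def chamfer(A, B):
    return sum(min(math.dist(a, b) for b in B) for a in A)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0), (1.0, 1.0)]
print(chamfer(A, B))  # each point of A is at distance 1 from its nearest in B → 2.0
```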

Cite as

Ying Feng and Piotr Indyk. Even Faster Algorithm for the Chamfer Distance. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 76:1-76:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{feng_et_al:LIPIcs.ICALP.2025.76,
  author =	{Feng, Ying and Indyk, Piotr},
  title =	{{Even Faster Algorithm for the Chamfer Distance}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{76:1--76:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.76},
  URN =		{urn:nbn:de:0030-drops-234531},
  doi =		{10.4230/LIPIcs.ICALP.2025.76},
  annote =	{Keywords: Chamfer distance}
}
Track B: Automata, Logic, Semantics, and Theory of Programming
Bayesian Inference in Quantum Programs

Authors: Christina Gehnen, Dominique Unruh, and Joost-Pieter Katoen

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
Conditioning is a key feature in probabilistic programming that enables modeling the influence of data (also known as observations) on the probability distribution described by such programs. Determining the posterior distribution is also known as Bayesian inference. This paper equips a quantum while-language with conditioning, defines its denotational and operational semantics over infinite-dimensional Hilbert spaces, and shows their equivalence. We provide sufficient conditions for the existence of weakest (liberal) precondition-transformers and derive inductive characterizations of these transformers. It is shown how w(l)p-transformers can be used to assess the effect of Bayesian inference on (possibly diverging) quantum programs.
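The paper treats conditioning in quantum programs over infinite-dimensional Hilbert spaces; as a purely classical analogue only, rejection sampling shows what an observe statement does to a probabilistic program's distribution:

```python
import random

# Classical analogue of conditioning in a probabilistic program: runs that
# violate the observation are discarded, and the posterior is the empirical
# distribution over the surviving runs. The coin-flip program is invented
# for illustration and has no quantum content.

def posterior_heads_given_at_least_one(trials=100_000, seed=0):
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        x, y = rng.randint(0, 1), rng.randint(0, 1)  # two fair coin flips
        if x + y >= 1:          # observe: at least one flip came up heads
            kept.append(x)
    return sum(kept) / len(kept)

# P(first coin is heads | at least one head); the exact posterior is 2/3.
print(round(posterior_heads_given_at_least_one(), 2))
```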

Cite as

Christina Gehnen, Dominique Unruh, and Joost-Pieter Katoen. Bayesian Inference in Quantum Programs. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 157:1-157:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gehnen_et_al:LIPIcs.ICALP.2025.157,
  author =	{Gehnen, Christina and Unruh, Dominique and Katoen, Joost-Pieter},
  title =	{{Bayesian Inference in Quantum Programs}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{157:1--157:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.157},
  URN =		{urn:nbn:de:0030-drops-235345},
  doi =		{10.4230/LIPIcs.ICALP.2025.157},
  annote =	{Keywords: Quantum Program Logics, Weakest Preconditions, Bayesian Inference, Program Semantics}
}
FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation

Authors: Amber Gorzynski and Alastair F. Donaldson

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
Decompilation is the process of translating compiled code into high-level code. Control flow recovery is a challenging part of the process. "Misdecompilations" can occur, whereby the decompiled code does not accurately represent the semantics of the compiled code, despite it being syntactically valid. This is problematic because it can mislead users who are trying to reason about the program. We present CFG-based program generation: a novel approach to randomised testing that aims to improve the control flow recovery of decompilers. CFG-based program generation involves randomly generating control flow graphs (CFGs) and paths through each graph. Inspired by prior work in the domain of GPU computing, (CFG, path) pairs are "fleshed" into test programs. Each program is decompiled and recompiled. The test oracle verifies whether the actual runtime path through the graph matches the expected path. Any difference in the execution paths after recompilation indicates a possible misdecompilation. A key benefit of this approach is that it is largely independent of the source and target languages in question because it is focused on control flow. The approach is therefore applicable to numerous decompilation settings. The trade-off resulting from the focus on control flow is that misdecompilation bugs that do not relate to control flow (e.g. bugs that involve specific arithmetic operations) are out of scope. We have implemented this approach in FuzzFlesh, an open-source randomised testing tool. FuzzFlesh can be easily configured to target a variety of low-level languages and decompiler toolchains because most of the CFG and path generation process is language-independent. At present, FuzzFlesh supports testing decompilation of Java bytecode, .NET assembly and x86 machine code. 
In addition to program generation, FuzzFlesh also includes an automated test-case reducer that operates on the CFG rather than the low-level program, which means that it can be applied to any of the target languages. We present a large experimental campaign applying FuzzFlesh to a variety of decompilers, leading to the discovery of 12 previously-unknown bugs across two language formats, six of which have been fixed. We present experiments comparing our generic FuzzFlesh tool to two state-of-the-art decompiler testing tools targeted at specific languages. As expected, the coverage our generic FuzzFlesh tool achieves on a given decompiler is lower than the coverage achieved by a tool specifically designed for the input format of that decompiler. However, due to its focus on control flow, FuzzFlesh is able to cover sections of control flow recovery code that the targeted tools cannot reach, and identify control flow related bugs that the targeted tools miss.
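The fleshing of graphs into real bytecode or assembly is FuzzFlesh's contribution and is not shown here; this sketch, with invented parameters, only illustrates the language-independent first step of generating a random CFG and an expected path through it:

```python
import random

# Sketch of (CFG, path) generation: build a random directed graph of basic
# blocks, then walk it to produce the expected execution path that a
# decompile-recompile cycle must reproduce. Successors only point forward
# here, so graphs are acyclic and walks terminate; parameters are invented.

def random_cfg(num_blocks, rng):
    """Each non-exit block gets 1-2 forward successors; the last block is the exit."""
    cfg = {}
    for b in range(num_blocks - 1):
        k = min(rng.randint(1, 2), num_blocks - b - 1)
        cfg[b] = rng.sample(range(b + 1, num_blocks), k)
    return cfg

def random_path(cfg, rng):
    """Walk from the entry block until a block with no successors (the exit)."""
    path, block = [0], 0
    while block in cfg:
        block = rng.choice(cfg[block])
        path.append(block)
    return path

rng = random.Random(42)
cfg = random_cfg(6, rng)
path = random_path(cfg, rng)
print(path)
print(path[0] == 0 and path[-1] == 5)  # the path runs from entry to exit
```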

Cite as

Amber Gorzynski and Alastair F. Donaldson. FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 13:1-13:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gorzynski_et_al:LIPIcs.ECOOP.2025.13,
  author =	{Gorzynski, Amber and Donaldson, Alastair F.},
  title =	{{FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{13:1--13:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.13},
  URN =		{urn:nbn:de:0030-drops-233062},
  doi =		{10.4230/LIPIcs.ECOOP.2025.13},
  annote =	{Keywords: Decompiler, Reverse Engineering, Control Flow, Software Testing, Fuzzing}
}
Optimal Motion Planning for Two Square Robots in a Rectilinear Environment

Authors: Pankaj K. Agarwal, Mark de Berg, Benjamin Holmgren, Alex Steiger, and Martijn Struijs

Published in: LIPIcs, Volume 332, 41st International Symposium on Computational Geometry (SoCG 2025)


Abstract
Let W ⊂ ℝ² be a rectilinear polygonal environment (that is, a rectilinear polygon potentially with holes) with a total of n vertices, and let A,B be two robots, each modeled as an axis-aligned unit square, that can move rectilinearly inside W. The goal is to compute an optimal collision-free motion plan π for A and B between a given pair of source and target configurations. We study two variants of this problem and obtain the following results. - Min-Sum: Here the goal is to compute a motion plan that minimizes the sum of the lengths of the paths of the robots. We present an O(n⁴log n)-time algorithm for computing an optimal solution to the min-sum problem. This is the first polynomial-time algorithm to compute an optimal, collision-free motion of two robots amid obstacles in a planar polygonal environment. - Min-Makespan: Here the robots can move with at most unit speed, and the goal is to compute a motion plan that minimizes the maximum time taken by a robot to reach its target location. We prove that the min-makespan variant is NP-hard.
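The two objectives in the abstract can be made concrete on toy per-robot path lengths (at unit speed a path's length equals its travel time; the numbers are illustrative only), showing that their optima can disagree:

```python
# Min-sum vs. min-makespan on hypothetical motion plans for robots A and B.
# Each plan is given as the pair of path lengths travelled by (A, B).

def min_sum(plan):
    """Total distance travelled by both robots."""
    return sum(plan)

def makespan(plan):
    """Time until the last robot reaches its target (unit speed)."""
    return max(plan)

plan_a = (10.0, 2.0)  # A takes a long detour so B can move almost directly
plan_b = (7.0, 6.0)   # both robots take moderate detours

print(min_sum(plan_a), min_sum(plan_b))    # 12.0 13.0: plan A wins on min-sum
print(makespan(plan_a), makespan(plan_b))  # 10.0 7.0: plan B wins on makespan
```

That the two objectives favour different plans is one intuition for why the paper treats them separately, with a polynomial algorithm for min-sum and an NP-hardness proof for min-makespan.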

Cite as

Pankaj K. Agarwal, Mark de Berg, Benjamin Holmgren, Alex Steiger, and Martijn Struijs. Optimal Motion Planning for Two Square Robots in a Rectilinear Environment. In 41st International Symposium on Computational Geometry (SoCG 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 332, pp. 5:1-5:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{agarwal_et_al:LIPIcs.SoCG.2025.5,
  author =	{Agarwal, Pankaj K. and de Berg, Mark and Holmgren, Benjamin and Steiger, Alex and Struijs, Martijn},
  title =	{{Optimal Motion Planning for Two Square Robots in a Rectilinear Environment}},
  booktitle =	{41st International Symposium on Computational Geometry (SoCG 2025)},
  pages =	{5:1--5:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-370-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{332},
  editor =	{Aichholzer, Oswin and Wang, Haitao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SoCG.2025.5},
  URN =		{urn:nbn:de:0030-drops-231577},
  doi =		{10.4230/LIPIcs.SoCG.2025.5},
  annote =	{Keywords: Computational geometry, motion planning, multiple robots, rectilinear paths}
}
Generalized Inner Product Estimation with Limited Quantum Communication

Authors: Srinivasan Arunachalam and Louis Schatzki

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
In this work, we consider the fundamental task of distributed inner product estimation under limited communication. Suppose Alice and Bob are given k copies of unknown n-qubit quantum states |ψ⟩ and |ϕ⟩, respectively, and are allowed to send q qubits to one another; the task is to estimate |⟨ψ|ϕ⟩|² up to constant additive error. We show that k = Θ(√{2^{n-q}}) copies are essentially necessary and sufficient for this task (extending the work of Anshu, Landau and Liu (STOC'22), who considered the case q = 0). Additionally, we consider the task in which the goal of the players is to estimate |⟨ψ|M|ϕ⟩|² for an arbitrary Hermitian M. For this task, we show that certain norms on M determine the sample complexity of estimating |⟨ψ|M|ϕ⟩|² when only classical communication is used.
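The distributed protocol itself is not reproduced here; the snippet below merely evaluates the target quantity |⟨ψ|ϕ⟩|² directly for toy single-qubit states, as the ground truth an estimator would be compared against:

```python
# Direct evaluation of |<psi|phi>|^2 for states given as amplitude lists.
# The toy states |+> and |-> are orthogonal, so their overlap is 0.

def fidelity_sq(psi, phi):
    inner = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return abs(inner) ** 2

s = 2 ** -0.5
psi = [s, s]   # |+>
phi = [s, -s]  # |->
print(round(fidelity_sq(psi, psi), 6), round(fidelity_sq(psi, phi), 6))  # 1.0 0.0
```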

Cite as

Srinivasan Arunachalam and Louis Schatzki. Generalized Inner Product Estimation with Limited Quantum Communication. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 11:1-11:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{arunachalam_et_al:LIPIcs.STACS.2025.11,
  author =	{Arunachalam, Srinivasan and Schatzki, Louis},
  title =	{{Generalized Inner Product Estimation with Limited Quantum Communication}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{11:1--11:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Thắng, Nguy\~{ê}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.11},
  URN =		{urn:nbn:de:0030-drops-228366},
  doi =		{10.4230/LIPIcs.STACS.2025.11},
  annote =	{Keywords: Quantum property testing, Quantum Distributed Algorithms}
}
A Faster Algorithm for Constrained Correlation Clustering

Authors: Nick Fischer, Evangelos Kipouridis, Jonas Klausen, and Mikkel Thorup

Published in: LIPIcs, Volume 327, 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)


Abstract
In the Correlation Clustering problem we are given n nodes, and a preference for each pair of nodes indicating whether we prefer the two endpoints to be in the same cluster or not. The output is a clustering inducing the minimum number of violated preferences. In certain cases, however, the preference between some pairs may be too important to be violated. The constrained version of this problem specifies pairs of nodes that must be in the same cluster as well as pairs that must not be in the same cluster (hard constraints). The output clustering has to satisfy all hard constraints while minimizing the number of violated preferences. Constrained Correlation Clustering is APX-Hard and has been approximated within a factor 3 by van Zuylen et al. [SODA '07]. Their algorithm is based on rounding an LP with Θ(n³) constraints, resulting in an Ω(n^{3ω}) running time. In this work, using a more combinatorial approach, we show how to approximate this problem significantly faster at the cost of a slightly weaker approximation factor. In particular, our algorithm runs in Õ(n³) time (notice that the input size is Θ(n²)) and approximates Constrained Correlation Clustering within a factor 16. To achieve our result we need properties guaranteed by a particular influential algorithm for (unconstrained) Correlation Clustering, the CC-PIVOT algorithm. This algorithm chooses a pivot node u, creates a cluster containing u and all its preferred nodes, and recursively solves the rest of the problem. It is known that selecting pivots at random gives a 3-approximation. As a byproduct of our work, we provide a derandomization of the CC-PIVOT algorithm that still achieves the 3-approximation; furthermore, we show that there exist instances where no ordering of the pivots can give a (3-ε)-approximation, for any constant ε. 
Finally, we introduce a node-weighted version of Correlation Clustering, which can be approximated within factor 3 using our insights on Constrained Correlation Clustering. As the general weighted version of Correlation Clustering would require a major breakthrough to approximate within a factor o(log n), Node-Weighted Correlation Clustering may be a practical alternative.
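The pivoting step described in the abstract can be sketched as follows. This is an illustrative sketch of the randomized, unconstrained CC-PIVOT only; the function name and the `prefers` predicate are our notation, not the paper's.

```python
import random

def cc_pivot(nodes, prefers, rng=None):
    """Randomized CC-PIVOT for (unconstrained) Correlation Clustering.

    prefers(u, v) -> True if the pair prefers to share a cluster ("+" pair).
    Choosing pivots uniformly at random yields a 3-approximation in
    expectation on the number of violated preferences.
    """
    rng = rng or random.Random()
    remaining = list(nodes)
    clusters = []
    while remaining:
        u = rng.choice(remaining)  # pick a random pivot
        # Cluster the pivot together with every remaining node it prefers.
        cluster = [u] + [v for v in remaining if v != u and prefers(u, v)]
        clusters.append(cluster)
        in_cluster = set(cluster)
        remaining = [v for v in remaining if v not in in_cluster]  # recurse on the rest
    return clusters
```

On an instance whose "+" pairs form two disjoint cliques (here: same-parity integers), any pivot order recovers the two cliques exactly, e.g. `cc_pivot(range(6), lambda u, v: u % 2 == v % 2)`.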

Cite as

Nick Fischer, Evangelos Kipouridis, Jonas Klausen, and Mikkel Thorup. A Faster Algorithm for Constrained Correlation Clustering. In 42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 327, pp. 32:1-32:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{fischer_et_al:LIPIcs.STACS.2025.32,
  author =	{Fischer, Nick and Kipouridis, Evangelos and Klausen, Jonas and Thorup, Mikkel},
  title =	{{A Faster Algorithm for Constrained Correlation Clustering}},
  booktitle =	{42nd International Symposium on Theoretical Aspects of Computer Science (STACS 2025)},
  pages =	{32:1--32:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-365-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{327},
  editor =	{Beyersdorff, Olaf and Pilipczuk, Micha{\l} and Pimentel, Elaine and Th\'{\u{a}}ng, Nguy\~{\^{e}}n Kim},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2025.32},
  URN =		{urn:nbn:de:0030-drops-228585},
  doi =		{10.4230/LIPIcs.STACS.2025.32},
  annote =	{Keywords: Clustering, Constrained Correlation Clustering, Approximation}
}
Document
Formulations and Constructions of Remote State Preparation with Verifiability, with Applications

Authors: Jiayu Zhang

Published in: LIPIcs, Volume 325, 16th Innovations in Theoretical Computer Science Conference (ITCS 2025)


Abstract
Remote state preparation with verifiability (RSPV) is an important quantum cryptographic primitive [Alexandru Gheorghiu and Thomas Vidick, 2019; Jiayu Zhang, 2022]. In this primitive, a client would like to prepare a quantum state (sampled or chosen from a state family) on the server side, such that ideally the client knows its full description, while the server holds and only holds the state itself. In this work we make several contributions to its formulations, constructions and applications. In more detail: - We first work on the definitions and abstract properties of the RSPV problem. We select and compare different variants of definitions [Bennett et al., 2001; Alexandru Gheorghiu and Thomas Vidick, 2019; Jiayu Zhang, 2022; Alexandru Gheorghiu et al., 2022], and study their basic properties (like composability and amplification). - We also study a closely related question of how to certify the server’s operations (instead of solely the states). We introduce a new notion named remote operator application with verifiability (ROAV). We compare this notion with related existing definitions [Summers and Werner, 1987; Dominic Mayers and Andrew Chi-Chih Yao, 2004; Zhengfeng Ji et al., 2021; Tony Metger and Thomas Vidick, 2021; Anand Natarajan and Tina Zhang, 2023], study its abstract properties and leave its concrete constructions to future work. - Building on the abstract properties and existing results [Zvika Brakerski et al., 2023], we construct a series of new RSPV protocols. Our constructions not only simplify existing results [Alexandru Gheorghiu and Thomas Vidick, 2019] but also cover new state families, for example, states of the form 1/√2 (|0⟩ |x_0⟩ + |1⟩ |x_1⟩). All these constructions rely only on the existence of weak NTCF [Zvika Brakerski et al., 2020; Navid Alamati et al., 2022], without additional requirements like the adaptive hardcore bit property [Zvika Brakerski et al., 2018; Navid Alamati et al., 2022].
- As a further application, we show that the classical verification of quantum computations (CVQC) problem [Dorit Aharonov et al., 2010; Urmila Mahadev, 2018] could be constructed from assumptions on group actions [Navid Alamati et al., 2020]. This is achieved by combining our results on RSPV with group-action-based instantiation of weak NTCF [Navid Alamati et al., 2022], and then with the quantum-gadget-assisted quantum verification protocol [Ferracin et al., 2018].

Cite as

Jiayu Zhang. Formulations and Constructions of Remote State Preparation with Verifiability, with Applications. In 16th Innovations in Theoretical Computer Science Conference (ITCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 325, pp. 96:1-96:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{zhang:LIPIcs.ITCS.2025.96,
  author =	{Zhang, Jiayu},
  title =	{{Formulations and Constructions of Remote State Preparation with Verifiability, with Applications}},
  booktitle =	{16th Innovations in Theoretical Computer Science Conference (ITCS 2025)},
  pages =	{96:1--96:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-361-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{325},
  editor =	{Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2025.96},
  URN =		{urn:nbn:de:0030-drops-227245},
  doi =		{10.4230/LIPIcs.ITCS.2025.96},
  annote =	{Keywords: Quantum Cryptography, Remote State Preparation, Self-testing, Verification of Quantum Computations}
}
Document
Incentive Compatibility of Ethereum’s PoS Consensus Protocol

Authors: Ulysse Pavloff, Yackolley Amoussou-Guenou, and Sara Tucci-Piergiovanni

Published in: LIPIcs, Volume 324, 28th International Conference on Principles of Distributed Systems (OPODIS 2024)


Abstract
This paper investigates whether following the fork-choice rule in the Ethereum PoS consensus protocol constitutes a Nash equilibrium - i.e., whether the protocol that maintains the canonical chain in Ethereum is incentive-compatible. Specifically, we explore whether selfish participants may attempt to manipulate the fork-choice rule by forking out previous blocks and capturing the rewards associated with those blocks. Our analysis considers two strategies for participants: the obedient strategy, which adheres to the prescribed protocol, and the cunning strategy, which attempts to manipulate the fork-choice rule to gain more rewards. We evaluate the conditions under which selfish participants might deviate from the obedient strategy. We find that, in a synchronous system, following the prescribed fork-choice rule is incentive-compatible. However, in an eventually synchronous system, the protocol is eventually incentive-compatible - that is, only a limited number of proposers will find it profitable to fork the chain during the synchronous period. After this sequence of cunning proposers, subsequent proposers will find it more profitable to follow the protocol.

Cite as

Ulysse Pavloff, Yackolley Amoussou-Guenou, and Sara Tucci-Piergiovanni. Incentive Compatibility of Ethereum’s PoS Consensus Protocol. In 28th International Conference on Principles of Distributed Systems (OPODIS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 324, pp. 7:1-7:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{pavloff_et_al:LIPIcs.OPODIS.2024.7,
  author =	{Pavloff, Ulysse and Amoussou-Guenou, Yackolley and Tucci-Piergiovanni, Sara},
  title =	{{Incentive Compatibility of Ethereum’s PoS Consensus Protocol}},
  booktitle =	{28th International Conference on Principles of Distributed Systems (OPODIS 2024)},
  pages =	{7:1--7:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-360-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{324},
  editor =	{Bonomi, Silvia and Galletta, Letterio and Rivi\`{e}re, Etienne and Schiavoni, Valerio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2024.7},
  URN =		{urn:nbn:de:0030-drops-225431},
  doi =		{10.4230/LIPIcs.OPODIS.2024.7},
  annote =	{Keywords: Ethereum PoS, Game Theory, Block Reward}
}
Document
Optimal Multilevel Slashing for Blockchains

Authors: Kenan Wood, Hammurabi Mendes, and Jonad Pulaj

Published in: LIPIcs, Volume 324, 28th International Conference on Principles of Distributed Systems (OPODIS 2024)


Abstract
We present the notion of multilevel slashing, where proof-of-stake blockchain validators can obtain gradual levels of assurance that a certain block is bound to be finalized in a global consensus procedure, unless an increasing and optimally large number of Byzantine processes have their staked assets slashed - that is, deducted - due to provably incorrect behavior. Our construction is a highly parameterized generalization of combinatorial intersection systems based on finite projective spaces, with asymptotic high availability and optimal slashing properties. Even under weak conditions, we show that our construction has asymptotically optimal slashing properties with respect to message complexity and validator load; this result also illustrates a fundamental trade-off between message complexity, load, and slashing. In addition, we show that any intersection system whose ground elements are disjoint subsets of nodes (e.g. "committees" in committee-based consensus protocols) has asymptotic high availability under similarly weak conditions. Finally, our multilevel construction gives the flexibility to blockchain validators to decide how many "levels" of finalization assurance they wish to obtain. This functionality can be seen either as (i) a form of an early, slashing-based block finalization; or (ii) a service to support reorg tolerance.
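As a toy illustration of the kind of intersection system the abstract refers to (not the paper's actual parameterized construction): in the smallest projective plane, the Fano plane, the seven lines can serve as "committees" over seven ground elements, and any two committees intersect in exactly one element; any two quorums therefore share a common witness.

```python
from itertools import combinations

# Lines of the Fano plane (projective plane of order 2): seven 3-element
# "committees" over ground elements {0, ..., 6}. In any projective plane,
# two distinct lines meet in exactly one point.
FANO_LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def pairwise_intersections(lines):
    """Intersection of every unordered pair of committees."""
    return [a & b for a, b in combinations(lines, 2)]

# Every pair of committees shares exactly one ground element.
assert all(len(s) == 1 for s in pairwise_intersections(FANO_LINES))
```

The paper's construction generalizes such systems to higher-dimensional finite projective spaces with tunable parameters; this snippet only demonstrates the pairwise-intersection property that underlies the quorum argument.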

Cite as

Kenan Wood, Hammurabi Mendes, and Jonad Pulaj. Optimal Multilevel Slashing for Blockchains. In 28th International Conference on Principles of Distributed Systems (OPODIS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 324, pp. 8:1-8:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


@InProceedings{wood_et_al:LIPIcs.OPODIS.2024.8,
  author =	{Wood, Kenan and Mendes, Hammurabi and Pulaj, Jonad},
  title =	{{Optimal Multilevel Slashing for Blockchains}},
  booktitle =	{28th International Conference on Principles of Distributed Systems (OPODIS 2024)},
  pages =	{8:1--8:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-360-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{324},
  editor =	{Bonomi, Silvia and Galletta, Letterio and Rivi\`{e}re, Etienne and Schiavoni, Valerio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2024.8},
  URN =		{urn:nbn:de:0030-drops-225445},
  doi =		{10.4230/LIPIcs.OPODIS.2024.8},
  annote =	{Keywords: Blockchains, Finality, Slashability, Committees, Availability}
}
