28 Search Results for "Lin, Yue"


Document
Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352)

Authors: Christine Bauer, Li Chen, Nicola Ferro, Norbert Fuhr, Avishek Anand, Timo Breuer, Guglielmo Faggioli, Ophir Frieder, Hideo Joho, Jussi Karlgren, Johannes Kiesel, Bart P. Knijnenburg, Aldo Lipani, Lien Michiels, Andrea Papenmeier, Maria Soledad Pera, Mark Sanderson, Scott Sanner, Benno Stein, Johanne R. Trippas, Karin Verspoor, and Martijn C. Willemsen

Published in: Dagstuhl Manifestos, Volume 11, Issue 1 (2025)


Abstract
During the workshop, we discussed in depth what CONversational Information ACcess (CONIAC) is and what its unique features are, proposed a world model abstracting it, and defined the Conversational Agents Framework for Evaluation (CAFE) for the evaluation of CONIAC systems. CAFE consists of six major components: 1) goals of the system’s stakeholders, 2) user tasks to be studied in the evaluation, 3) aspects of the users carrying out the tasks, 4) evaluation criteria to be considered, 5) evaluation methodology to be applied, and 6) measures for the quantitative criteria chosen.
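
For readers who want to operationalize the framework, the sketch below renders the six CAFE components as a simple configuration object. The class name, field names, and example values are our own paraphrase of the component list in the abstract, not an official schema from the manifesto.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CAFEEvaluationDesign:
    """One concrete evaluation design expressed along CAFE's six components.

    Field names paraphrase the six components from the abstract; they are
    illustrative, not an official schema from the manifesto.
    """
    stakeholder_goals: List[str] = field(default_factory=list)  # 1) goals of the system's stakeholders
    user_tasks: List[str] = field(default_factory=list)         # 2) user tasks studied in the evaluation
    user_aspects: List[str] = field(default_factory=list)       # 3) aspects of the users carrying out the tasks
    criteria: List[str] = field(default_factory=list)           # 4) evaluation criteria considered
    methodology: str = ""                                       # 5) evaluation methodology applied
    measures: List[str] = field(default_factory=list)           # 6) measures for the quantitative criteria

design = CAFEEvaluationDesign(
    stakeholder_goals=["user satisfaction"],
    user_tasks=["exploratory product search"],
    user_aspects=["domain expertise"],
    criteria=["response usefulness"],
    methodology="controlled user study",
    measures=["graded usefulness on a 5-point scale"],
)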

Cite as

Christine Bauer, Li Chen, Nicola Ferro, Norbert Fuhr, Avishek Anand, Timo Breuer, Guglielmo Faggioli, Ophir Frieder, Hideo Joho, Jussi Karlgren, Johannes Kiesel, Bart P. Knijnenburg, Aldo Lipani, Lien Michiels, Andrea Papenmeier, Maria Soledad Pera, Mark Sanderson, Scott Sanner, Benno Stein, Johanne R. Trippas, Karin Verspoor, and Martijn C. Willemsen. Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352). In Dagstuhl Manifestos, Volume 11, Issue 1, pp. 19-67, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{bauer_et_al:DagMan.11.1.19,
  author =	{Bauer, Christine and Chen, Li and Ferro, Nicola and Fuhr, Norbert and Anand, Avishek and Breuer, Timo and Faggioli, Guglielmo and Frieder, Ophir and Joho, Hideo and Karlgren, Jussi and Kiesel, Johannes and Knijnenburg, Bart P. and Lipani, Aldo and Michiels, Lien and Papenmeier, Andrea and Pera, Maria Soledad and Sanderson, Mark and Sanner, Scott and Stein, Benno and Trippas, Johanne R. and Verspoor, Karin and Willemsen, Martijn C.},
  title =	{{Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352)}},
  pages =	{19--67},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2025},
  volume =	{11},
  number =	{1},
  editor =	{Bauer, Christine and Chen, Li and Ferro, Nicola and Fuhr, Norbert and Anand, Avishek and Breuer, Timo and Faggioli, Guglielmo and Frieder, Ophir and Joho, Hideo and Karlgren, Jussi and Kiesel, Johannes and Knijnenburg, Bart P. and Lipani, Aldo and Michiels, Lien and Papenmeier, Andrea and Pera, Maria Soledad and Sanderson, Mark and Sanner, Scott and Stein, Benno and Trippas, Johanne R. and Verspoor, Karin and Willemsen, Martijn C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.11.1.19},
  URN =		{urn:nbn:de:0030-drops-252722},
  doi =		{10.4230/DagMan.11.1.19},
  annote =	{Keywords: Conversational Agents, Evaluation, Information Access}
}
Document
Survey
Resilience in Knowledge Graph Embeddings

Authors: Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2 (2025)


Abstract
In recent years, knowledge graphs have gained interest and seen widespread application in various domains, such as information retrieval, question answering, and recommendation systems. Large-scale knowledge graphs have demonstrated their utility in effectively representing structured knowledge. To further facilitate the application of machine learning techniques, knowledge graph embedding (KGE) models have been developed; such models transform the entities and relationships within knowledge graphs into vectors. However, these embedding models often face challenges related to noise, missing information, distribution shift, adversarial attacks, and similar issues, which can lead to sub-optimal embeddings and incorrect inferences, thereby negatively impacting downstream applications. While the existing literature has so far focused on adversarial attacks on KGE models, the challenges related to the other critical aspects remain unexplored. In this paper, we first give a unified definition of resilience, encompassing several factors such as generalisation, in-distribution generalisation, distribution adaptation, and robustness. After formalizing these concepts for machine learning in general, we define them in the context of knowledge graphs. To identify the gaps in existing work on resilience in the context of knowledge graphs, we perform a systematic survey that takes all of the aforementioned aspects into account. Our survey shows that most existing work focuses on a specific aspect of resilience, namely robustness. After categorizing these works by the aspects of resilience they address, we discuss the challenges and future research directions.
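
The robustness aspect the survey highlights can be probed empirically by perturbing embeddings and observing how a KGE scoring function reacts. The sketch below uses the well-known TransE score on random toy embeddings purely to illustrate such a probe; the embeddings, noise scale, and triples are placeholders, and this is not code from the survey.

import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 32, 100, 10

# Toy embedding tables standing in for a trained KGE model (illustration only).
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def transe_score(h, r, t):
    """TransE plausibility: higher (less negative) means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def robustness_probe(triples, noise_scale=0.1):
    """Compare triple scores before and after adding Gaussian noise to entity embeddings."""
    clean = np.array([transe_score(h, r, t) for h, r, t in triples])
    E_noisy = E + rng.normal(scale=noise_scale, size=E.shape)
    noisy = np.array([-np.linalg.norm(E_noisy[h] + R[r] - E_noisy[t]) for h, r, t in triples])
    return np.mean(np.abs(clean - noisy))  # average score drift under perturbation

triples = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
print("mean score drift:", robustness_probe(triples))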

Cite as

Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo. Resilience in Knowledge Graph Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 1:1-1:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{sharma_et_al:TGDK.3.2.1,
  author =	{Sharma, Arnab and Kouagou, N'Dah Jean and Ngomo, Axel-Cyrille Ngonga},
  title =	{{Resilience in Knowledge Graph Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:38},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.1},
  URN =		{urn:nbn:de:0030-drops-248117},
  doi =		{10.4230/TGDK.3.2.1},
  annote =	{Keywords: Knowledge graphs, Resilience, Robustness}
}
Document
Research
GraphRAG on Technical Documents - Impact of Knowledge Graph Schema

Authors: Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2 (2025)


Abstract
Retrieval Augmented Generation (RAG) is seeing rapid adoption in industry to enable employees to query information captured in their organisation’s proprietary data. In this work, we test the impact of domain-relevant knowledge graph schemas on the results of Microsoft’s GraphRAG pipeline. Our approach aims to address the poor quality of GraphRAG responses on technical reports rich in domain-specific terms. The use case involves technical reports about geology, chemistry and mineral processing published by the Minerals Research Institute of Western Australia (MRIWA). Four schemas are considered: a simple five-class minerals domain expert-developed schema, an expanded minerals domain schema, the Microsoft GraphRAG auto-generated schema, and a schema-less GraphRAG. These are compared to a conventional baseline RAG. Performance is evaluated using a scoring approach that accounts for the mix of correct, incorrect, additional, and missing content in RAG responses. The results show that the simple five-class minerals domain schema extracts approximately 10% more entities from the MRIWA reports than the other schema options. Additionally, both the five-class and the expanded eight-class minerals domain schemas produce the most factually correct answers and the fewest hallucinations. We attribute this to the minerals-specific schemas extracting more relevant, domain-specific information during the Indexing stage. As a result, the Query stage’s context window includes more high-value content. This contributes to the observed improvement in answer quality compared to the other pipelines. In contrast, pipelines with fewer domain-related entities in the KG retrieve less valuable information, leaving more room for irrelevant content in the context window. Baseline RAG responses were typically shorter, less complete, and contained more hallucinations compared to our GraphRAG pipelines. We provide a complete set of resources at https://github.com/nlp-tlp/GraphRAG-on-Minerals-Domain/tree/main. These resources include links to the MRIWA reports, a set of questions (from simple to challenging) along with domain-expert curated answers, schemas, and evaluations of the pipelines.
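
The abstract describes a scoring approach that balances correct, incorrect, additional, and missing content in a response. The exact metric is defined in the paper; the function below is only a plausible sketch of such a mixed-content score, and the weights and example counts are arbitrary illustrative assumptions.

def mixed_content_score(correct, incorrect, additional, missing,
                        w_incorrect=1.0, w_additional=0.25, w_missing=0.5):
    """Toy response score: reward correct statements, penalise the rest.

    The four counts come from comparing a RAG answer against a curated
    reference answer; the weights are illustrative, not the paper's values.
    """
    total = correct + incorrect + additional + missing
    if total == 0:
        return 0.0
    penalty = w_incorrect * incorrect + w_additional * additional + w_missing * missing
    return max(0.0, (correct - penalty) / total)

# Example: 6 correct facts, 1 hallucination, 2 extra remarks, 1 omission.
print(round(mixed_content_score(6, 1, 2, 1), 3))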

Cite as

Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke. GraphRAG on Technical Documents - Impact of Knowledge Graph Schema. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{scaffidi_et_al:TGDK.3.2.3,
  author =	{Scaffidi, Henri and Hodkiewicz, Melinda and Woods, Caitlin and Roocke, Nicole},
  title =	{{GraphRAG on Technical Documents - Impact of Knowledge Graph Schema}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.3},
  URN =		{urn:nbn:de:0030-drops-248131},
  doi =		{10.4230/TGDK.3.2.3},
  annote =	{Keywords: RAG, minerals, local search, global search, entity extraction, competency questions}
}
Document
Canonical for Automated Theorem Proving in Lean

Authors: Chase Norman and Jeremy Avigad

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Canonical is a solver for type inhabitation in dependent type theory, that is, the problem of producing a term of a given type. We present a Lean tactic which invokes Canonical to generate proof terms and synthesize programs. The tactic supports higher-order and dependently-typed goals, structural recursion over indexed inductive types, and definitional equality. Canonical finds proofs for 84% of Natural Number Game problems in 51 seconds total.

Cite as

Chase Norman and Jeremy Avigad. Canonical for Automated Theorem Proving in Lean. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{norman_et_al:LIPIcs.ITP.2025.14,
  author =	{Norman, Chase and Avigad, Jeremy},
  title =	{{Canonical for Automated Theorem Proving in Lean}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.14},
  URN =		{urn:nbn:de:0030-drops-246128},
  doi =		{10.4230/LIPIcs.ITP.2025.14},
  annote =	{Keywords: Automated Reasoning, Interactive Theorem Proving, Dependent Type Theory, Inhabitation, Unification, Program Synthesis, Formal Methods}
}
Document
Gaze Beyond Limits: Integrating Eye-Tracking and Augmented Reality for Next-Generation Spacesuit Interaction

Authors: Jiayu He, Yifan Li, Oliver R. Runswick, Peter D. Hodkinson, Jarle Steinberg, Felix Gorbatsevich, and Yang Gao

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
Extravehicular activities (EVAs) are increasingly frequent in human spaceflight, particularly in spacecraft maintenance, scientific research, and planetary exploration. Spacesuits are essential for sustaining astronauts in the harsh environment of space, making their design a key factor in the success of EVA missions. The development of spacesuit technology has traditionally been driven by highly engineered solutions focused on life support, mission adaptability, and operational efficiency. Modern spacesuits prioritize maintaining optimal internal temperature, humidity, and pressure, as well as withstanding extreme temperature fluctuations and providing robust protection against micrometeoroid impacts and space debris. However, their bulkiness and rigidity impose significant physical strain on astronauts, reducing mobility and dexterity, particularly in tasks requiring fine motor control. The restricted field of view further complicates situational awareness, increasing the cognitive load during high-precision operations. While traditional spacesuits support basic EVA tasks, future space exploration, shifting toward long-duration lunar and Martian surface missions, demands more adaptive, intelligent, and astronaut-centric designs to overcome current constraints. To explore a next-generation spacesuit, this paper proposes an in-process eye-tracking-embedded Augmented Reality (AR) Spacesuit System to enhance astronaut-environment interactions. By leveraging Segment-Anything Models (SAM) and Vision-Language Models (VLMs), we demonstrate a four-step approach that enables top-down gaze detection to minimize erroneous fixation data, gaze-based segmentation of objects of interest, real-time contextual assistance via AR overlays, and hands-free operation within the spacesuit. This approach enhances real-time situational awareness and improves EVA task efficiency. We conclude with an exploration of the AR Helmet System’s potential to revolutionize human-space interaction paradigms for future long-duration deep-space missions and discuss the further optimization of eye-tracking interactions using VLMs to predict astronaut intent and highlight relevant objects preemptively.
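
The gaze-based segmentation step of the described pipeline can be prototyped by prompting a Segment Anything model with the current gaze point. The snippet below is a minimal sketch assuming Meta's public segment-anything package and a separately downloaded checkpoint; the checkpoint path, model variant, and gaze coordinate are placeholders, and this is not the authors' implementation.

import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder checkpoint path; the weights must be downloaded separately.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def segment_at_gaze(image_rgb: np.ndarray, gaze_xy: tuple[int, int]) -> np.ndarray:
    """Return the highest-scoring mask for the object under the gaze point."""
    predictor.set_image(image_rgb)                    # H x W x 3 uint8 RGB frame
    point = np.array([gaze_xy], dtype=np.float32)     # one positive point prompt at the gaze location
    labels = np.array([1])
    masks, scores, _ = predictor.predict(point_coords=point,
                                         point_labels=labels,
                                         multimask_output=True)
    return masks[int(np.argmax(scores))]              # boolean H x W mask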

Cite as

Jiayu He, Yifan Li, Oliver R. Runswick, Peter D. Hodkinson, Jarle Steinberg, Felix Gorbatsevich, and Yang Gao. Gaze Beyond Limits: Integrating Eye-Tracking and Augmented Reality for Next-Generation Spacesuit Interaction. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 29:1-29:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{he_et_al:OASIcs.SpaceCHI.2025.29,
  author =	{He, Jiayu and Li, Yifan and Runswick, Oliver R. and Hodkinson, Peter D. and Steinberg, Jarle and Gorbatsevich, Felix and Gao, Yang},
  title =	{{Gaze Beyond Limits: Integrating Eye-Tracking and Augmented Reality for Next-Generation Spacesuit Interaction}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{29:1--29:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.29},
  URN =		{urn:nbn:de:0030-drops-240197},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.29},
  annote =	{Keywords: Augmented Reality (AR), Eye-Tracking, Cognitive Load/Workload, Segment Anything Model (SAM), Visual Language Models (VLMs)}
}
Document
In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments

Authors: Tomas Blažauskas, Arnoldas Rauba, Jakub Swacha, Raffaele Montella, and Rytis Maskeliunas

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
The paper presents a browser-native C++ interpreter integrated into an AI-assisted educational platform designed to enhance programming learning in formal education. The interpreter leverages Parsing Expression Grammars (PEG) to generate Abstract Syntax Trees (AST) and executes C++ code using a TypeScript-based runtime. The system supports key C++ features, including pointer arithmetic, function overloading, and namespace resolution, and emulates memory management via reference-counted JavaScript objects. Integrated within a web-based learning environment, it provides automated feedback, error explanations, and code quality evaluations. The evaluation involved 4582 students across three difficulty levels and feedback from 14 teachers. The results include high System Usability Scale (SUS) scores (avg. 83.5) and WBLT learning-effectiveness scores (avg. 4.58/5). Interpreter performance testing across 65 cases averaged under 10 ms per task, confirming its practical applicability to school curricula. The system supports SCORM and PWA deployment, enabling LMS-independent usage. The work introduces a technical innovation in browser-based C++ execution and a scalable framework for LLM-enhanced programming pedagogy.
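
To make the parse-to-AST-to-runtime pipeline concrete, the toy example below parses arithmetic expressions with a PEG-style grammar, builds an AST, and interprets it. It is written in Python purely for brevity and uses an invented grammar; the actual system parses C++ and executes on a TypeScript runtime, so nothing here is the paper's code.

import re

TOKEN = re.compile(r"\s*(\d+|[()+*])")

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def parse(tokens):
    """PEG-style ordered rules: Expr <- Term ('+' Term)* ; Term <- Atom ('*' Atom)*."""
    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1                # skip ')'
        return ("num", int(tokens[i])), i + 1
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = atom(i + 1)
            node = ("mul", node, rhs)
        return node, i
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("add", node, rhs)
        return node, i
    node, _ = expr(0)
    return node

def evaluate(node):
    """Walk the AST and compute its value (the 'runtime' stage of the pipeline)."""
    tag = node[0]
    if tag == "num":
        return node[1]
    lhs, rhs = evaluate(node[1]), evaluate(node[2])
    return lhs + rhs if tag == "add" else lhs * rhs

print(evaluate(parse(tokenize("2*(3+4)"))))  # prints 14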

Cite as

Tomas Blažauskas, Arnoldas Rauba, Jakub Swacha, Raffaele Montella, and Rytis Maskeliunas. In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 14:1-14:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{blazauskas_et_al:OASIcs.ICPEC.2025.14,
  author =	{Bla\v{z}auskas, Tomas and Rauba, Arnoldas and Swacha, Jakub and Montella, Raffaele and Maskeliunas, Rytis},
  title =	{{In-Browser C++ Interpreter for Lightweight Intelligent Programming Learning Environments}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{14:1--14:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.14},
  URN =		{urn:nbn:de:0030-drops-240449},
  doi =		{10.4230/OASIcs.ICPEC.2025.14},
  annote =	{Keywords: C++ interpreter, browser-based execution, programming education, LLM-assisted learning, PEG, AST, TypeScript runtime}
}
Document
MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication

Authors: Brett Hemenway Falk, Daniel Noble, and Rafail Ostrovsky

Published in: LIPIcs, Volume 343, 6th Conference on Information-Theoretic Cryptography (ITC 2025)


Abstract
A Distributed Oblivious RAM is a multi-party protocol that securely implements a RAM functionality on secret-shared inputs and outputs. This paper presents two information-theoretically secure DORAMs whose communication costs are asymptotic improvements over the state of the art. Let n be the number of memory locations and let d be the bit-length of each location. The first, MetaDORAM1, is statistically secure, with n^{-ω(1)} leakage. It has amortized O(log_b(n) d + b ω(1) log(n) + log³(n)/log(log(n))) bits of communication per memory access. Here, b ≥ 2 is a free parameter and ω(1) is any super-constant function (in n). The most communication-efficient prior statistically secure DORAM was that of Abraham et al (PKC 2017), which has cost O(log_b(n) d + b ω(1) log_b(n) log²(n)). MetaDORAM1 is a Θ(ω(1) log(log(n)))-factor improvement over the work of Abraham et al whenever d = O(log²(n)). The second protocol, MetaDORAM2, achieves perfect security. It has amortized communication cost O(log_b(n)d + b log(n) + log³(n)/log(log(n))) where, again, b ≥ 2 is a free parameter. The best prior perfectly secure DORAM is that of Chan et al (ASIACRYPT 2018) which has communication cost O(log(n) d + log³(n)). MetaDORAM2 is therefore a Ω(log(log(n)))-factor improvement over the DORAM of Chan et al under any parameter range (by setting b = log(n)) and is a Θ(log(n))-factor improvement for d = Ω(n^ε) for any constant ε > 0 (by setting b = d/log(n)). Our work is the first perfectly secure DORAM with sub-logarithmic communication overhead. MetaDORAM2 comes at the cost of a once-off (for any given n) setup phase which requires exponential (in n) computation. Both DORAMs are in the 3-party setting with security against 1 semi-honest, static corruption. By a trivial transformation, these can be transformed, respectively, into statistically and perfectly secure active 3-server ORAM protocols secure against 1 corrupt server, with the same communication costs. These multi-server ORAM protocols are likewise asymptotic improvements over the state of the art.
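
Restating the communication bounds from the abstract side by side makes the claimed improvements easier to compare; the LaTeX below is a plain transcription of the formulas quoted above, not new analysis.

\begin{align*}
\textit{MetaDORAM1:}\quad & O\!\Big(\log_b(n)\,d + b\,\omega(1)\log(n) + \tfrac{\log^3(n)}{\log\log(n)}\Big)\\
\text{Abraham et al.\ (PKC 2017):}\quad & O\!\Big(\log_b(n)\,d + b\,\omega(1)\log_b(n)\log^2(n)\Big)\\
\textit{MetaDORAM2:}\quad & O\!\Big(\log_b(n)\,d + b\log(n) + \tfrac{\log^3(n)}{\log\log(n)}\Big)\\
\text{Chan et al.\ (ASIACRYPT 2018):}\quad & O\!\Big(\log(n)\,d + \log^3(n)\Big)
\end{align*}

As stated in the abstract, for d = O(log²(n)) the first pair differ by a Θ(ω(1) log(log(n))) factor, and setting b = log(n) in MetaDORAM2 yields the Ω(log(log(n))) improvement over Chan et al.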

Cite as

Brett Hemenway Falk, Daniel Noble, and Rafail Ostrovsky. MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication. In 6th Conference on Information-Theoretic Cryptography (ITC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 343, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{falk_et_al:LIPIcs.ITC.2025.6,
  author =	{Falk, Brett Hemenway and Noble, Daniel and Ostrovsky, Rafail},
  title =	{{MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication}},
  booktitle =	{6th Conference on Information-Theoretic Cryptography (ITC 2025)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-385-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{343},
  editor =	{Gilboa, Niv},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2025.6},
  URN =		{urn:nbn:de:0030-drops-243560},
  doi =		{10.4230/LIPIcs.ITC.2025.6},
  annote =	{Keywords: ORAM, MPC, DORAM, multi-server ORAM, active ORAM}
}
Document
Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits

Authors: Neil J. Ross and Scott Wesley

Published in: LIPIcs, Volume 345, 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)


Abstract
Many promising quantum algorithms in economics, medical science, and material science rely on circuits that are parameterized by a large number of angles. To ensure that these algorithms are efficient, these parameterized circuits must be heavily optimized. However, most quantum circuit optimizers are not verified, so this procedure is known to be error-prone. For this reason, there is growing interest in the design of equivalence checking algorithms for parameterized quantum circuits. In this paper, we define a generalized class of parameterized circuits with arbitrary rotations and show that this problem is decidable for cyclotomic gate sets. We propose a cutoff-based procedure which reduces the problem of verifying the equivalence of parameterized quantum circuits to the problem of verifying the equivalence of finitely many parameter-free quantum circuits. Because the number of parameter-free circuits grows exponentially with the number of parameters, we also propose a probabilistic variant of the algorithm for cases when the number of parameters is intractably large. We show that our techniques extend to equivalence modulo global phase, and describe an efficient angle sampling procedure for cyclotomic gate sets.
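
As a naive contrast to the cutoff procedure, parameterized equivalence can be probed by sampling angle assignments and comparing the resulting parameter-free unitaries up to global phase; the paper's contribution is showing that a finite, explicitly constructed set of instantiations suffices. The toy check below (Rz(θ)Rz(φ) versus Rz(θ+φ)) is our own illustration, not the paper's algorithm.

import numpy as np

def rz(theta):
    """Single-qubit Z-rotation."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def equal_up_to_global_phase(U, V, tol=1e-9):
    """True if U = e^{i a} V for some real a."""
    idx = np.unravel_index(np.argmax(np.abs(V)), V.shape)  # align phases on V's largest entry
    phase = U[idx] / V[idx]
    return np.allclose(U, phase * V, atol=tol)

rng = np.random.default_rng(1)
for _ in range(100):                     # sampled instantiations, not a cutoff set
    theta, phi = rng.uniform(0, 2 * np.pi, size=2)
    assert equal_up_to_global_phase(rz(theta) @ rz(phi), rz(theta + phi))
print("all sampled instantiations agree up to global phase")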

Cite as

Neil J. Ross and Scott Wesley. Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits. In 50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 345, pp. 85:1-85:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ross_et_al:LIPIcs.MFCS.2025.85,
  author =	{Ross, Neil J. and Wesley, Scott},
  title =	{{Cutoff Theorems for the Equivalence of Parameterized Quantum Circuits}},
  booktitle =	{50th International Symposium on Mathematical Foundations of Computer Science (MFCS 2025)},
  pages =	{85:1--85:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-388-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{345},
  editor =	{Gawrychowski, Pawe{\l} and Mazowiecki, Filip and Skrzypczak, Micha{\l}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2025.85},
  URN =		{urn:nbn:de:0030-drops-241921},
  doi =		{10.4230/LIPIcs.MFCS.2025.85},
  annote =	{Keywords: Quantum Circuits, Parameterized Equivalence Checking}
}
Document
Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing

Authors: Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Millions of biological sample records collected in the last few centuries and archived in natural history collections are un-georeferenced. Georeferencing the complex locality descriptions associated with these collection samples is a highly labour-intensive task that collection agencies struggle with. None of the existing automated methods exploit maps, which are an essential tool for georeferencing complex relations. We present preliminary experiments and results of a novel method that exploits the multi-modal capabilities of recent Large Multi-Modal Models (LMMs). This method enables the model to visually contextualize the spatial relations it reads in the locality description. We use a grid-based approach to adapt these auto-regressive models for this task in a zero-shot setting. Our experiments, conducted on a small manually annotated dataset, show impressive results for our approach (∼1 km average distance error) compared to uni-modal georeferencing with Large Language Models and existing georeferencing tools. The paper also discusses the findings of the experiments in light of an LMM’s ability to comprehend fine-grained maps. Motivated by these results, a practical framework is proposed to integrate this method into a georeferencing workflow.
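
The grid-based adaptation mentioned above can be pictured as overlaying a labelled grid on a map crop and asking the model for a cell; the helper below converts a predicted cell label back to geographic coordinates. It is a generic sketch under our own assumptions about grid labelling (row letters, 1-based column numbers) and is not the authors' code.

def cell_to_latlon(cell, rows, cols, lat_min, lat_max, lon_min, lon_max):
    """Map a grid cell label like 'C7' (row letter, column number) to the
    centre latitude/longitude of that cell within the map's bounding box."""
    row = ord(cell[0].upper()) - ord("A")       # 'A' is assumed to be the top row
    col = int(cell[1:]) - 1                     # columns are assumed 1-based
    if not (0 <= row < rows and 0 <= col < cols):
        raise ValueError(f"cell {cell!r} outside a {rows}x{cols} grid")
    lat = lat_max - (row + 0.5) * (lat_max - lat_min) / rows
    lon = lon_min + (col + 0.5) * (lon_max - lon_min) / cols
    return lat, lon

# Example: cell 'C7' on a 10x10 grid covering a 1-degree map tile.
print(cell_to_latlon("C7", 10, 10, -41.0, -40.0, 175.0, 176.0))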

Cite as

Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones. Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{wijegunarathna_et_al:LIPIcs.GIScience.2025.12,
  author =	{Wijegunarathna, Kalana and Stock, Kristin and Jones, Christopher B.},
  title =	{{Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.12},
  URN =		{urn:nbn:de:0030-drops-238412},
  doi =		{10.4230/LIPIcs.GIScience.2025.12},
  annote =	{Keywords: Large Multi-Modal Models, Large Language Models, LLM, Georeferencing, Natural History collections}
}
Document
Guiding Geospatial Analysis Processes in Dealing with Modifiable Areal Unit Problems

Authors: Guoray Cai and Yue Hao

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Geospatial analysis has been widely applied in different domains for critical decision making. However, the results of spatial analysis are often plagued with uncertainties due to measurement errors, the choice of data representations, and unintended transformation artifacts. A well-known example of such problems is the Modifiable Areal Unit Problem (MAUP), which has well-documented effects on the outcome of spatial analysis on area-aggregated data. Existing methods for addressing the effects of MAUP are limited, technically complex, and often inaccessible to practitioners. As a result, analysts tend to ignore the effects of MAUP in practice due to lack of expertise, high cognitive load, and resource limitations. To address these challenges, this paper proposes a machine-guidance approach to augment the analyst’s capacity to mitigate the effect of MAUP. Based on an analysis of the practical challenges faced by human analysts, we identified multiple opportunities for the machine to guide analysts by alerting them to the rise of MAUP, assessing the impact of MAUP, choosing mitigation methods, and generating visual guidance messages using GIS functions and tools. For each of these opportunities, we characterize the behavior patterns and the underlying guidance strategies that generate the behavior. We illustrate the behavior of machine guidance using a hotspot analysis scenario in the context of crime policing, where MAUP has strong effects on the patterns of crime hotspots. Finally, we describe the computational framework used to build a prototype guidance system and identify a number of research questions to be addressed. We conclude by discussing how the machine-guidance approach could be an answer to some of the toughest problems in geospatial analysis.
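
The MAUP effect that such a guidance system is meant to flag can be reproduced in a few lines: aggregating the same point events under different zonings changes how concentrated the apparent hotspot looks. The demonstration below is a generic illustration on synthetic data, not part of the prototype described above.

import numpy as np

rng = np.random.default_rng(42)
# Synthetic crime incidents on a unit square, clustered near (0.45, 0.55).
points = np.concatenate([rng.normal([0.45, 0.55], 0.05, size=(200, 2)),
                         rng.uniform(0, 1, size=(100, 2))])
points = np.clip(points, 0, 1 - 1e-9)

def counts_per_cell(pts, k):
    """Aggregate points into a k x k grid of areal units and return counts."""
    idx = np.floor(pts * k).astype(int)
    grid = np.zeros((k, k), dtype=int)
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
    return grid

# The apparent hotspot and its share of all incidents shift with the zoning.
for k in (2, 4, 8):
    grid = counts_per_cell(points, k)
    print(f"{k}x{k} units: busiest cell holds {grid.max() / len(points):.0%} of incidents")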

Cite as

Guoray Cai and Yue Hao. Guiding Geospatial Analysis Processes in Dealing with Modifiable Areal Unit Problems. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 14:1-14:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cai_et_al:LIPIcs.GIScience.2025.14,
  author =	{Cai, Guoray and Hao, Yue},
  title =	{{Guiding Geospatial Analysis Processes in Dealing with Modifiable Areal Unit Problems}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{14:1--14:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.14},
  URN =		{urn:nbn:de:0030-drops-238433},
  doi =		{10.4230/LIPIcs.GIScience.2025.14},
  annote =	{Keywords: Machine Guidance, Geo-Spatial Analysis, Modifiable Areal Unit Problem (MAUP)}
}
Document
A Modularity-Driven Framework for Unraveling Congestion Centers with Enhanced Spatial-Semantic Features

Authors: Weihua Huan, Xintao Liu, and Wei Huang

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
The propagation of traffic congestion is a complicated spatiotemporal phenomenon in urban networks. Extensive studies have mainly relied on dynamic Bayesian networks or deep learning approaches. However, they often struggle to adapt seamlessly to diverse data granularities, limiting their applicability. In this study, we propose a modularity-driven method to unravel spatiotemporal congestion propagation centers, effectively addressing temporal granularity challenges through the use of the fast Fourier transform (FFT). Our framework distinguishes itself by its capacity to integrate enhanced spatial-semantic features while eliminating temporal granularity dependence, and it consists of two data-driven modules. One is an adaptive adjacency matrix learning module, which captures spatiotemporal relationships from evolving congestion graphs by fusing node degree, spatial proximity, and the FFT of traffic state indices. The other is a local search module, which employs local dominance principles to unravel the congestion propagation centers. We validate the proposed methodology on large-scale traffic networks in New York City, the United States. An ablation study on the dataset reveals that the combination of the three features achieves the highest modularity score of 0.65. The contribution of our work is to provide a novel way to infer the propagation centers of traffic congestion and to reveal the flexibility of extending our framework across temporal scales. The network resilience and dynamic evolution of the identified congestion centers can provide implications for actionable decisions.
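
A bare-bones version of the feature fusion described above (node degree, spatial proximity, and an FFT feature of the traffic state), followed by a modularity score for a candidate partition, might look like the sketch below. The graph, coordinates, feature weights, and partition are arbitrary placeholders; this is an illustration of the idea, not the authors' framework.

import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(30, 0.15, seed=0)          # stand-in road-segment graph
speeds = rng.uniform(5, 60, size=(30, 64))          # per-node traffic state time series

# FFT-based feature of each node's traffic state (first non-DC frequency magnitude).
fft_feat = np.abs(np.fft.rfft(speeds, axis=1))[:, 1]
pos = {n: rng.uniform(0, 1, size=2) for n in G}     # synthetic node coordinates

for u, v in G.edges:
    degree_term = 1.0 / (1 + abs(G.degree[u] - G.degree[v]))
    proximity_term = 1.0 / (1e-3 + np.linalg.norm(pos[u] - pos[v]))
    fft_term = 1.0 / (1 + abs(fft_feat[u] - fft_feat[v]))
    # Fused edge weight; equal weighting of the three cues is an arbitrary choice.
    G[u][v]["weight"] = degree_term + proximity_term + fft_term

# Modularity of an arbitrary two-way split of the nodes, on the fused weights.
partition = [set(range(15)), set(range(15, 30))]
print("modularity:", round(modularity(G, partition, weight="weight"), 3))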

Cite as

Weihua Huan, Xintao Liu, and Wei Huang. A Modularity-Driven Framework for Unraveling Congestion Centers with Enhanced Spatial-Semantic Features. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 7:1-7:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{huan_et_al:LIPIcs.GIScience.2025.7,
  author =	{Huan, Weihua and Liu, Xintao and Huang, Wei},
  title =	{{A Modularity-Driven Framework for Unraveling Congestion Centers with Enhanced Spatial-Semantic Features}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{7:1--7:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.7},
  URN =		{urn:nbn:de:0030-drops-238362},
  doi =		{10.4230/LIPIcs.GIScience.2025.7},
  annote =	{Keywords: Congestion center, Temporal granularity, Fast Fourier Transform, Local dominance}
}
Document
Mutational Signature Refitting on Sparse Pan-Cancer Data

Authors: Gal Gilad, Teresa M. Przytycka, and Roded Sharan

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
Mutational processes shape cancer genomes, leaving characteristic marks that are termed signatures. The level of activity of each such process, or its signature exposure, provides important information on the disease, improving patient stratification and the prediction of drug response. Thus, there is growing interest in developing refitting methods that decipher those exposures. Previous work in this domain was unsupervised in nature, employing algebraic decomposition and probabilistic inference methods. Here we provide a supervised approach to the problem of signature refitting and show its superiority over current methods. Our method, SuRe, leverages a neural network model to capture correlations between signature exposures in real data. We show that SuRe outperforms previous methods on sparse mutation data from tumor type specific data sets, as well as pan-cancer data sets, with an increasing advantage as the data become sparser. We further demonstrate its utility in clinical settings.
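
Signature refitting takes a fixed catalogue of signatures and estimates per-sample exposures. The classical unsupervised baseline that a supervised method like SuRe is compared against can be approximated with non-negative least squares, as in the sketch below on synthetic data; SuRe itself replaces this step with a trained neural model, and none of the numbers here come from the paper.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_contexts, n_signatures = 96, 5                 # 96 trinucleotide mutation contexts

# Synthetic signature catalogue (columns sum to 1) and true exposures.
W = rng.dirichlet(np.ones(n_contexts), size=n_signatures).T   # 96 x 5
true_exposures = rng.gamma(2.0, 50.0, size=n_signatures)
counts = rng.poisson(W @ true_exposures)                      # sparse mutation counts

# Unsupervised refitting baseline: minimise ||W e - counts|| subject to e >= 0.
estimated, _ = nnls(W, counts.astype(float))
print("true:     ", np.round(true_exposures, 1))
print("estimated:", np.round(estimated, 1))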

Cite as

Gal Gilad, Teresa M. Przytycka, and Roded Sharan. Mutational Signature Refitting on Sparse Pan-Cancer Data. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gilad_et_al:LIPIcs.WABI.2025.11,
  author =	{Gilad, Gal and Przytycka, Teresa M. and Sharan, Roded},
  title =	{{Mutational Signature Refitting on Sparse Pan-Cancer Data}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.11},
  URN =		{urn:nbn:de:0030-drops-239374},
  doi =		{10.4230/LIPIcs.WABI.2025.11},
  annote =	{Keywords: mutational signatures, signature refitting, cancer genomics, genomic data analysis, somatic mutations}
}
Document
Semantic Representation of Adverbs in the Lexicalized Meaning Representation (LMR) Framework

Authors: Jorge Baptista, Izabela Müller, and Sónia Reis

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
Semantic parsing serves as a crucial interface between natural language and formal meaning representations, enabling computational systems to capture the underlying semantic structure of linguistic expressions. This paper addresses a relatively understudied area in both linguistic theory and natural language processing: the semantic representation of adverbs. We conduct a comparative analysis of annotation guidelines and practices across two semantic representation frameworks: Lexicalized Meaning Representation (LMR), applied to the European Portuguese edition of the novella "O Principezinho" by Antoine de Saint-Exupéry (1943); and Abstract Meaning Representation (AMR), applied to the Brazilian Portuguese edition, "O Pequeno Príncipe". The study reveals significant limitations in AMR’s handling of adverbial constructions, particularly when assessed against contemporary syntactic-semantic advances in linguistic theory. Furthermore, it highlights the theoretical and practical challenges that LMR continues to face in this domain.

Cite as

Jorge Baptista, Izabela Müller, and Sónia Reis. Semantic Representation of Adverbs in the Lexicalized Meaning Representation (LMR) Framework. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{baptista_et_al:OASIcs.SLATE.2025.9,
  author =	{Baptista, Jorge and M\"{u}ller, Izabela and Reis, S\'{o}nia},
  title =	{{Semantic Representation of Adverbs in the Lexicalized Meaning Representation (LMR) Framework}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{9:1--9:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.9},
  URN =		{urn:nbn:de:0030-drops-236891},
  doi =		{10.4230/OASIcs.SLATE.2025.9},
  annote =	{Keywords: Semantic representation, Adverbs, Lexicalized Meaning Representation (LMR), Abstract Meaning Representation (AMR), Annotation guidelines, European Portuguese, Brazilian Portuguese, Comparative analysis, The Little Prince, Corpus linguistics, Natural Language Processing (NLP), Multi-word expressions, Syntactic-semantic interface, Linguistic theory}
}
Document
Invited Talk
Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs (Invited Talk)

Authors: Matthew L. Daggitt, Wen Kokke, Robert Atkey, Ekaterina Komendantskaya, Natalia Slusarz, and Luca Arnaboldi

Published in: LIPIcs, Volume 337, 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)


Abstract
Neuro-symbolic programs, i.e. programs containing both machine learning components and traditional symbolic code, are becoming increasingly widespread. Finding a general methodology for verifying such programs is challenging due to both the number of different tools involved and the intricate interface between the "neural" and "symbolic" program components. In this paper we present a general decomposition of the neuro-symbolic verification problem into parts, and examine the problem of the embedding gap that occurs when one tries to combine proofs about the neural and symbolic components. To address this problem we then introduce Vehicle - standing as an abbreviation for a "verification condition language" - an intermediate programming language interface between machine learning frameworks, automated theorem provers, and dependently-typed formalisations of neuro-symbolic programs. Vehicle allows users to specify the properties of the neural components of neuro-symbolic programs once, and then safely compile the specification to each interface using a tailored typing and compilation procedure. We give a high-level overview of Vehicle’s overall design, its interfaces and compilation & type-checking procedures, and then demonstrate its utility by formally verifying the safety of a simple autonomous car controlled by a neural network, operating in a stochastic environment with imperfect information.

Cite as

Matthew L. Daggitt, Wen Kokke, Robert Atkey, Ekaterina Komendantskaya, Natalia Slusarz, and Luca Arnaboldi. Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs (Invited Talk). In 10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 337, pp. 2:1-2:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{daggitt_et_al:LIPIcs.FSCD.2025.2,
  author =	{Daggitt, Matthew L. and Kokke, Wen and Atkey, Robert and Komendantskaya, Ekaterina and Slusarz, Natalia and Arnaboldi, Luca},
  title =	{{Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs}},
  booktitle =	{10th International Conference on Formal Structures for Computation and Deduction (FSCD 2025)},
  pages =	{2:1--2:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-374-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{337},
  editor =	{Fern\'{a}ndez, Maribel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSCD.2025.2},
  URN =		{urn:nbn:de:0030-drops-236172},
  doi =		{10.4230/LIPIcs.FSCD.2025.2},
  annote =	{Keywords: Neural Network Verification, Types, Interactive Theorem Provers}
}
Document
Chain of Grounded Objectives: Concise Goal-Oriented Prompting for Code Generation

Authors: Sangyeop Yeo, Seung-Won Hwang, and Yu-Seung Ma

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
The use of Large Language Models (LLMs) for code generation has gained significant attention in recent years. Existing methods often aim to improve the quality of generated code by incorporating additional contextual information or guidance into input prompts. Many of these approaches adopt process-oriented reasoning strategies, mimicking human-like step-by-step thinking; however, they may not always align with the structured nature of programming languages. This paper introduces Chain of Grounded Objectives (CGO), a concise goal-oriented prompting approach that embeds functional objectives into prompts to enhance code generation. By focusing on precisely defined objectives rather than explicit procedural steps, CGO aligns more naturally with programming tasks while retaining flexibility. Empirical evaluations on HumanEval, MBPP, their extended versions, and LiveCodeBench show that CGO achieves accuracy comparable to or better than existing methods while using fewer tokens, making it a more efficient approach to LLM-based code generation.
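
The gist of CGO, prepending concise functional objectives rather than step-by-step reasoning to the code-generation prompt, can be sketched as below. The template wording, function name, and example objectives are our own illustrative assumptions; the paper defines the actual prompt format.

def build_cgo_style_prompt(signature: str, objectives: list[str]) -> str:
    """Assemble a goal-oriented code-generation prompt: the target function
    signature plus a short list of grounded objectives (illustrative template,
    not the paper's exact wording)."""
    goals = "\n".join(f"- {g}" for g in objectives)
    return (
        f"Complete the following Python function.\n\n"
        f"{signature}\n\n"
        f"The implementation must satisfy these objectives:\n{goals}\n"
    )

prompt = build_cgo_style_prompt(
    "def running_max(xs: list[int]) -> list[int]:",
    ["return a list of the same length as xs",
     "element i is the maximum of xs[:i+1]",
     "handle an empty input by returning an empty list"],
)
print(prompt)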

Cite as

Sangyeop Yeo, Seung-Won Hwang, and Yu-Seung Ma. Chain of Grounded Objectives: Concise Goal-Oriented Prompting for Code Generation. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 35:1-35:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{yeo_et_al:LIPIcs.ECOOP.2025.35,
  author =	{Yeo, Sangyeop and Hwang, Seung-Won and Ma, Yu-Seung},
  title =	{{Chain of Grounded Objectives: Concise Goal-Oriented Prompting for Code Generation}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{35:1--35:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.35},
  URN =		{urn:nbn:de:0030-drops-233271},
  doi =		{10.4230/LIPIcs.ECOOP.2025.35},
  annote =	{Keywords: Artificial Intelligence, Natural Language Processing, Prompt Design, Large Language Models, Code Generation}
}
  • Refine by Type
  • 28 Document/PDF
  • 23 Document/HTML

  • Refine by Publication Year
  • 17 2025
  • 2 2024
  • 8 2023
  • 1 2022

  • Refine by Author
  • 3 Biswas, Russa
  • 3 de Melo, Gerard
  • 2 Chen, Jiaoyan
  • 2 Cochez, Michael
  • 2 Kaffee, Lucie-Aimée

  • Refine by Series/Journal
  • 10 LIPIcs
  • 4 OASIcs
  • 2 LITES
  • 11 TGDK
  • 1 DagMan

  • Refine by Classification
  • 4 Computing methodologies → Artificial intelligence
  • 4 Computing methodologies → Knowledge representation and reasoning
  • 3 Computing methodologies → Natural language processing
  • 3 Computing methodologies → Semantic networks
  • 2 Computer systems organization → Embedded and cyber-physical systems

  • Refine by Keyword
  • 4 Knowledge Graphs
  • 4 Large Language Models
  • 2 Explainable AI
  • 2 Knowledge graphs
  • 2 Resilience
