51 Search Results for "Li, Jing"


Document
Survey
Resilience in Knowledge Graph Embeddings

Authors: Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo

Published in: Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2 (2025)


Abstract
In recent years, knowledge graphs have gained interest and witnessed widespread applications in various domains, such as information retrieval, question answering, and recommendation systems. To this end, large-scale knowledge graphs have demonstrated their utility in effectively representing structured knowledge. To further facilitate the application of machine learning techniques, knowledge graph embedding (KGE) models have been developed; such models transform the entities and relations of a knowledge graph into vectors. However, these embedding models often face challenges related to noise, missing information, distribution shift, and adversarial attacks, which can lead to sub-optimal embeddings and incorrect inferences, thereby negatively impacting downstream applications. While the existing literature has so far focused on adversarial attacks on KGE models, the challenges related to the other critical aspects remain largely unexplored. In this paper, we first give a unified definition of resilience, encompassing several factors such as generalization, in-distribution generalization, distribution adaptation, and robustness. After formalizing these concepts for machine learning in general, we define them in the context of knowledge graphs. To identify the gaps in existing work on resilience in the context of knowledge graphs, we perform a systematic survey that takes all of these aspects into account. Our survey results show that most existing works focus on one specific aspect of resilience, namely robustness. After categorizing such works based on their respective aspects of resilience, we discuss the challenges and future research directions.
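
Resilience here spans several measurable notions. As a purely illustrative aid (not taken from the survey), the sketch below uses a TransE-style scorer on random embeddings and measures how triple scores shift under Gaussian perturbation of the entity embeddings, one crude proxy for the robustness facet discussed in the paper; the sizes, data, and noise model are all assumptions of the example.

# Minimal, illustrative sketch (not from the surveyed paper): a TransE-style
# scorer and a crude robustness probe that measures how link-prediction scores
# change when entity embeddings are perturbed with Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 5, 32

# Randomly initialised embeddings stand in for a trained model.
E = rng.normal(size=(num_entities, dim))
R = rng.normal(size=(num_relations, dim))

def transe_score(h, r, t, E=E, R=R):
    # TransE: a triple (h, r, t) is plausible when E[h] + R[r] is close to E[t],
    # so a smaller distance means a higher-scoring (more plausible) triple.
    return -np.linalg.norm(E[h] + R[r] - E[t])

def robustness_gap(triples, noise_std):
    # Average absolute score change under embedding perturbation: one simple
    # proxy for the "robustness" facet of resilience.
    E_noisy = E + rng.normal(scale=noise_std, size=E.shape)
    gaps = [abs(transe_score(h, r, t) - transe_score(h, r, t, E=E_noisy))
            for h, r, t in triples]
    return float(np.mean(gaps))

triples = [(rng.integers(num_entities), rng.integers(num_relations),
            rng.integers(num_entities)) for _ in range(50)]
print(robustness_gap(triples, noise_std=0.1))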

Cite as

Arnab Sharma, N'Dah Jean Kouagou, and Axel-Cyrille Ngonga Ngomo. Resilience in Knowledge Graph Embeddings. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 1:1-1:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{sharma_et_al:TGDK.3.2.1,
  author =	{Sharma, Arnab and Kouagou, N'Dah Jean and Ngomo, Axel-Cyrille Ngonga},
  title =	{{Resilience in Knowledge Graph Embeddings}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{1:1--1:38},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.1},
  URN =		{urn:nbn:de:0030-drops-248117},
  doi =		{10.4230/TGDK.3.2.1},
  annote =	{Keywords: Knowledge graphs, Resilience, Robustness}
}
Document
Composable Byzantine Agreements with Reorder Attacks

Authors: Jing Chen, Jin Dong, Jichen Li, Xuanzhi Xia, and Wentao Zhou

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
Byzantine agreement (BA) is a foundational building block in distributed systems that has been extensively studied for decades. With the growing demand for protocol composition in practice, the security analysis of BA protocols under multi-instance executions has attracted increasing attention. However, most existing adversary models focus solely on party corruption and neglect important threats posed by adversarial manipulations of communication channels in the network. Through channel attacks, messages can be reordered across multiple executions, leading to violations of the protocol’s security guarantees without any participating party being corrupted. In this work, we present the first adversary model that combines party corruption and channel attacks. Based on this model, we establish new security thresholds for Byzantine agreement under parallel and concurrent compositions, supported by complementary impossibility and possibility results that match each other to form a tight bound. For the impossibility result, we show that even authenticated Byzantine agreement protocols cannot be secure under parallel composition when n ≤ 3t or n ≤ 2c + 2t + 1, where t and c denote the number of corrupted parties and communication channels, respectively. For the possibility result, we prove the existence of secure protocols for unauthenticated Byzantine agreement under parallel and concurrent composition when n > 3t and n > 2c + 2t + 1. More specifically, we provide a general black-box compiler that transforms any single-instance secure BA protocol into one that is secure under parallel executions, and we provide a non-black-box construction for concurrent compositions.
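
The thresholds in the abstract are simple arithmetic conditions on n, t, and c; the following helper (illustrative only, not part of the paper's constructions) makes the tight bound explicit.

# Illustrative helper (not from the paper): checks whether a configuration of
# n parties, t corrupted parties, and c adversarial channels satisfies the
# possibility condition stated in the abstract (n > 3t and n > 2c + 2t + 1).
def composable_ba_possible(n: int, t: int, c: int) -> bool:
    return n > 3 * t and n > 2 * c + 2 * t + 1

# Example: 10 parties tolerate t = 2 corruptions with c = 1 reordered channel,
# but not t = 3 corruptions with c = 2 channels.
print(composable_ba_possible(10, 2, 1))  # True
print(composable_ba_possible(10, 3, 2))  # False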

Cite as

Jing Chen, Jin Dong, Jichen Li, Xuanzhi Xia, and Wentao Zhou. Composable Byzantine Agreements with Reorder Attacks. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 13:1-13:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chen_et_al:LIPIcs.AFT.2025.13,
  author =	{Chen, Jing and Dong, Jin and Li, Jichen and Xia, Xuanzhi and Zhou, Wentao},
  title =	{{Composable Byzantine Agreements with Reorder Attacks}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{13:1--13:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.13},
  URN =		{urn:nbn:de:0030-drops-247321},
  doi =		{10.4230/LIPIcs.AFT.2025.13},
  annote =	{Keywords: Byzantine agreement, protocol composition, channel reorder attack, security threshold}
}
Document
Efficiency of Learned Indexes on Genome Spectra

Authors: Md. Hasin Abrar, Paul Medvedev, and Giorgio Vinciguerra

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
Data structures on a multiset of genomic k-mers are at the heart of many bioinformatic tools. As genomic datasets grow in scale, the efficiency of these data structures increasingly depends on how well they leverage the inherent patterns in the data. One recent and effective approach is the use of learned indexes that approximate the rank function of a multiset using a piecewise linear function with very few segments. However, theoretical worst-case analysis struggles to predict the practical performance of these indexes. We address this limitation by developing a novel measure of piecewise-linear approximability of the data, called CaPLa (Canonical Piecewise Linear approximability). CaPLa builds on the empirical observation that a power-law model often serves as a reasonable proxy for piecewise-linear approximability, while explicitly accounting for deviations from a true power-law fit. We prove basic properties of CaPLa and present an efficient algorithm to compute it. We then demonstrate that CaPLa can accurately predict space bounds for data structures on real data. Empirically, we analyze over 500 genomes through the lens of CaPLa, revealing that it varies widely across the tree of life and even within individual genomes. Finally, we study the robustness of CaPLa as a measure and the factors that make genomic k-mer multisets different from random ones.
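
As a rough illustration of the quantity such a measure reasons about, the sketch below counts how many segments a greedy, anchor-at-segment-start piecewise linear approximation of the rank function needs for a given error; this simplified segmentation is not the paper's CaPLa computation, and the inputs are made up.

# Simplified sketch (not the paper's CaPLa algorithm): greedily split the rank
# function of a sorted multiset into segments, each approximable within error
# `eps` by a line anchored at the segment's first point. Fewer segments means
# the data is "more piecewise-linearly approximable".
def count_segments(keys, eps):
    # keys: sorted list of distinct k-mer codes; rank(keys[i]) = i.
    segments, start = 1, 0
    lo, hi = float("-inf"), float("inf")   # feasible slope interval
    for i in range(1, len(keys)):
        dx = keys[i] - keys[start]
        # The line through (keys[start], start) must pass within +/- eps of (keys[i], i).
        lo = max(lo, (i - start - eps) / dx)
        hi = min(hi, (i - start + eps) / dx)
        if lo > hi:                        # no single slope fits: start a new segment
            segments += 1
            start = i
            lo, hi = float("-inf"), float("inf")
    return segments

print(count_segments(list(range(0, 1000, 7)), eps=1.0))       # perfectly linear: 1 segment
print(count_segments([x * x for x in range(1000)], eps=1.0))  # convex: many segments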

Cite as

Md. Hasin Abrar, Paul Medvedev, and Giorgio Vinciguerra. Efficiency of Learned Indexes on Genome Spectra. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 18:1-18:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{abrar_et_al:LIPIcs.ESA.2025.18,
  author =	{Abrar, Md. Hasin and Medvedev, Paul and Vinciguerra, Giorgio},
  title =	{{Efficiency of Learned Indexes on Genome Spectra}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{18:1--18:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.18},
  URN =		{urn:nbn:de:0030-drops-244865},
  doi =		{10.4230/LIPIcs.ESA.2025.18},
  annote =	{Keywords: Genome spectra, piecewise linear approximation, learned index, k-mers}
}
Document
Compact Representation of Semilinear and Terrain-Like Graphs

Authors: Jean Cardinal and Yelena Yuditsky

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
We consider the existence and construction of biclique covers of graphs, consisting of coverings of their edge sets by complete bipartite graphs. The size of such a cover is the sum of the sizes of the bicliques. Small-size biclique covers of graphs are ubiquitous in computational geometry, and have been shown to be useful compact representations of graphs. We give a brief survey of classical and recent results on biclique covers and their applications, and give new families of graphs having biclique covers of near-linear size. In particular, we show that semilinear graphs, whose edges are defined by linear relations in bounded dimensional space, always have biclique covers of size O(n polylog n). This generalizes many previously known results on special classes of graphs, including interval graphs, permutation graphs, and graphs of bounded boxicity, and also covers new classes such as intersection graphs of L-shapes in the plane. It also directly implies the bounds for Zarankiewicz’s problem derived by Basit, Chernikov, Starchenko, Tao, and Tran (Forum Math. Sigma, 2021). We also consider capped graphs, also known as terrain-like graphs, defined as ordered graphs forbidding a certain ordered pattern on four vertices. Terrain-like graphs contain the induced subgraphs of terrain visibility graphs. We give an elementary proof that these graphs admit biclique partitions of size O(n log³ n). This provides a simple combinatorial analogue of a classical result of Agarwal, Alon, Aronov, and Suri on polygon visibility graphs (Discrete Comput. Geom. 1994). Finally, we prove that there exist families of unit disk graphs on n vertices that do not admit biclique coverings of size o(n^{4/3}), showing that we are unlikely to improve on Szemerédi-Trotter type incidence bounds for higher-degree semialgebraic graphs.
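
For readers unfamiliar with the object being bounded, the snippet below builds the trivial biclique cover of a small graph (one star per vertex toward its higher-indexed neighbours) and reports its size; it only illustrates the definition of cover size, not the near-linear constructions of the paper.

# Illustrative only: the *trivial* biclique cover of a graph, one star per
# vertex toward its higher-indexed neighbours. Its size (sum of biclique side
# sizes) is on the order of n + m; the paper's point is that structured graphs
# admit covers whose size is near-linear in n rather than proportional to m.
def trivial_biclique_cover(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = []
    for v in range(n):
        later = sorted(w for w in adj[v] if w > v)
        if later:
            cover.append(({v}, set(later)))   # a star is a (1, k)-biclique
    return cover

cover = trivial_biclique_cover(4, [(0, 1), (0, 2), (1, 2), (2, 3)])
print(sum(len(a) + len(b) for a, b in cover))  # total size of the cover: 7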

Cite as

Jean Cardinal and Yelena Yuditsky. Compact Representation of Semilinear and Terrain-Like Graphs. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 67:1-67:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cardinal_et_al:LIPIcs.ESA.2025.67,
  author =	{Cardinal, Jean and Yuditsky, Yelena},
  title =	{{Compact Representation of Semilinear and Terrain-Like Graphs}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{67:1--67:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.67},
  URN =		{urn:nbn:de:0030-drops-245359},
  doi =		{10.4230/LIPIcs.ESA.2025.67},
  annote =	{Keywords: Biclique covers, intersection graphs, visibility graphs, Zarankiewicz’s problem}
}
Document
Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support

Authors: Kaisheng Li and Richard S. Whittle

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
We propose a unified framework for an Earth‑independent AI system that provides explainable, context‑aware decision support for EVA mission planning by integrating six core components: a fine‑tuned EVA domain LLM, a retrieval‑augmented knowledge base, a short-term memory store, physical simulation models, an agentic orchestration layer, and a multimodal user interface. To ground our design, we analyze the current roles and substitution potential of the Mission Control Center - identifying which procedural and analytical functions can be automated onboard while preserving human oversight for experiential and strategic tasks. Building on this framework, we introduce RASAGE (Retrieval & Simulation Augmented Guidance Agent for Exploration), a proof‑of‑concept toolset that combines Microsoft Phi‑4‑mini‑instruct with a FAISS (Facebook AI Similarity Search)‑powered EVA knowledge base and custom A* path planning and hypogravity metabolic models to generate grounded, traceable EVA plans. We outline a staged validation strategy to evaluate improvements in route efficiency, metabolic prediction accuracy, anomaly response effectiveness, and crew trust under realistic communication delays. Our findings demonstrate the feasibility of replicating key Mission Control functions onboard, enhancing crew autonomy, reducing cognitive load, and improving safety for deep‑space exploration missions.
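
Among the components listed, the path-planning model is the most self-contained to illustrate. The sketch below is a generic A* search on a 2D cost grid; the grid, terrain costs, and Manhattan heuristic are assumptions made for the example and are not RASAGE internals.

# Generic A* on a 2D cost grid: an illustrative stand-in for the kind of custom
# path-planning model the abstract mentions; the terrain costs, grid layout,
# and heuristic below are assumptions for the example, not RASAGE internals.
import heapq

def astar(cost, start, goal):
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return g, path
        if g > best.get(pos, float("inf")):
            continue                          # stale queue entry
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost[nr][nc]         # cost of entering the neighbouring cell
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

terrain = [[1, 1, 5, 1],
           [1, 9, 5, 1],
           [1, 1, 1, 1]]
print(astar(terrain, (0, 0), (2, 3)))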

Cite as

Kaisheng Li and Richard S. Whittle. Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{li_et_al:OASIcs.SpaceCHI.2025.6,
  author =	{Li, Kaisheng and Whittle, Richard S.},
  title =	{{Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{6:1--6:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.6},
  URN =		{urn:nbn:de:0030-drops-239967},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.6},
  annote =	{Keywords: Human-AI Interaction for Space Exploration, Extravehicular Activities, Cognitive load and Human Performance Issues, Human Systems Exploration, Lunar Exploration, LLM}
}
Document
In-Situ Visual Programming

Authors: Ulrich Brandstätter and Bernhard Schenkenfelder

Published in: OASIcs, Volume 134, Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)


Abstract
Most Visual Programming Environments (VPEs) available today aim to make software development more accessible for specific domains, such as automation, business intelligence, data science, education, or real-time media processing. In their niches, VPEs offer several advantages over traditional text-based programming, including shorter training times, immediate visual feedback, and lower barriers to entry. With this work, we introduce In-Situ Visual Programming (ISVP), a novel programming paradigm that enables users to create, modify, and contribute to software via visual programming in physical contexts. User-created and pre-built programs can be attached to and interlinked with physical objects in an Augmented Reality (AR) environment. We believe that the spatial and contextual proximity of processing code and physical objects will make software development more intuitive, and we argue this position based on two model use cases.

Cite as

Ulrich Brandstätter and Bernhard Schenkenfelder. In-Situ Visual Programming. In Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025). Open Access Series in Informatics (OASIcs), Volume 134, pp. 7:1-7:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{brandstatter_et_al:OASIcs.Programming.2025.7,
  author =	{Brandst\"{a}tter, Ulrich and Schenkenfelder, Bernhard},
  title =	{{In-Situ Visual Programming}},
  booktitle =	{Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)},
  pages =	{7:1--7:11},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-382-9},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{134},
  editor =	{Edwards, Jonathan and Perera, Roly and Petricek, Tomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Programming.2025.7},
  URN =		{urn:nbn:de:0030-drops-242916},
  doi =		{10.4230/OASIcs.Programming.2025.7},
  annote =	{Keywords: Visual programming, End-user programming, Programming paradigm}
}
Document
RANDOM
Sublinear Space Graph Algorithms in the Continual Release Model

Authors: Alessandro Epasto, Quanquan C. Liu, Tamalika Mukherjee, and Felix Zhou

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
The graph continual release model of differential privacy seeks to produce differentially private solutions to graph problems under a stream of edge updates, where new private solutions are released after each update. Thus far, known edge-differentially private algorithms for most graph problems in the continual release setting, including densest subgraph and matchings, only output real-valued estimates (not vertex subset solutions) and do not use sublinear space. Instead, they rely on computing exact graph statistics on the input [Hendrik Fichtenberger et al., 2021; Shuang Song et al., 2018]. In this paper, we leverage sparsification to address the above shortcomings for edge-insertion streams. Our edge-differentially private algorithms use sublinear space with respect to the number of edges in the graph, while some also achieve sublinear space in the number of vertices in the graph. In addition, for the densest subgraph problem, we also output edge-differentially private vertex subset solutions; no previous graph algorithms in the continual release model output such subsets. We make novel use of assorted sparsification techniques from the non-private streaming and static graph algorithms literature to achieve new results in the sublinear space, continual release setting. This includes algorithms for densest subgraph and maximum matching, as well as the first continual release k-core decomposition algorithm. We also develop a novel sparse level data structure for k-core decomposition that may be of independent interest. To complement our insertion-only algorithms, we conclude with polynomial additive error lower bounds for edge-privacy in the fully dynamic setting, where only logarithmic lower bounds were previously known.
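
For contrast with the paper's contributions, the snippet below shows the naive continual-release baseline for a single statistic (the running edge count): each of the T releases gets Laplace noise scaled to T/epsilon, so basic composition yields epsilon-DP overall. The paper's algorithms instead use sparsification, achieve sublinear space, and release vertex-subset solutions; none of that is reflected here.

# Naive baseline for edge-differentially private continual release of a single
# statistic (the running edge count). Each of the T releases adds Laplace noise
# of scale T/epsilon; each prefix count has sensitivity 1 with respect to one
# edge, so basic composition over T releases gives epsilon-DP overall. This is
# only a contrast point, not one of the paper's sparsification-based algorithms.
import numpy as np

def continual_edge_count(edge_stream, epsilon):
    rng = np.random.default_rng(0)
    T = len(edge_stream)
    releases, count = [], 0
    for _ in edge_stream:                       # one edge insertion per step
        count += 1
        noise = rng.laplace(scale=T / epsilon)  # budget epsilon/T per release
        releases.append(count + noise)
    return releases

print(continual_edge_count(range(10), epsilon=1.0))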

Cite as

Alessandro Epasto, Quanquan C. Liu, Tamalika Mukherjee, and Felix Zhou. Sublinear Space Graph Algorithms in the Continual Release Model. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 40:1-40:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{epasto_et_al:LIPIcs.APPROX/RANDOM.2025.40,
  author =	{Epasto, Alessandro and Liu, Quanquan C. and Mukherjee, Tamalika and Zhou, Felix},
  title =	{{Sublinear Space Graph Algorithms in the Continual Release Model}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{40:1--40:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.40},
  URN =		{urn:nbn:de:0030-drops-244064},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.40},
  annote =	{Keywords: Differential Privacy, Continual Release, Densest Subgraph, k-Core Decomposition, Maximum Matching}
}
Document
Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing

Authors: Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Millions of biological sample records collected in the last few centuries and archived in natural history collections are un-georeferenced. Georeferencing the complex locality descriptions associated with these collection samples is a highly labour-intensive task that collection agencies struggle with. None of the existing automated methods exploit maps, which are an essential tool for georeferencing complex relations. We present preliminary experiments and results of a novel method that exploits the multi-modal capabilities of recent Large Multi-Modal Models (LMMs). This method enables the model to visually contextualize the spatial relations it reads in the locality description. We use a grid-based approach to adapt these auto-regressive models for this task in a zero-shot setting. Our experiments, conducted on a small manually annotated dataset, show impressive results for our approach (∼1 km average distance error) compared to uni-modal georeferencing with Large Language Models and existing georeferencing tools. The paper also discusses the findings of the experiments in light of an LMM’s ability to comprehend fine-grained maps. Motivated by these results, a practical framework is proposed to integrate this method into a georeferencing workflow.
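
The grid-based adaptation can be pictured as asking the model for a cell label on the displayed map and mapping that label back to coordinates. The sketch below shows such a decoding step plus a haversine distance-error check; the grid geometry and numbers are invented for illustration and do not reproduce the paper's protocol.

# Illustrative sketch of a grid-based georeferencing step: the map shown to the
# model is overlaid with an R x C grid of labelled cells, the model answers with
# a cell label, and the label is mapped back to the cell-centre coordinates.
# The grid geometry and the haversine error check are assumptions of the example.
import math

def cell_to_latlon(cell, rows, cols, lat_min, lat_max, lon_min, lon_max):
    r, c = divmod(cell, cols)                      # row-major cell labelling
    lat = lat_max - (r + 0.5) * (lat_max - lat_min) / rows
    lon = lon_min + (c + 0.5) * (lon_max - lon_min) / cols
    return lat, lon

def haversine_km(p, q):
    (la1, lo1), (la2, lo2) = map(lambda x: (math.radians(x[0]), math.radians(x[1])), (p, q))
    a = math.sin((la2 - la1) / 2) ** 2 + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))      # great-circle distance in km

predicted = cell_to_latlon(cell=37, rows=10, cols=10,
                           lat_min=-41.0, lat_max=-40.0, lon_min=175.0, lon_max=176.0)
print(predicted, haversine_km(predicted, (-40.35, 175.62)), "km")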

Cite as

Kalana Wijegunarathna, Kristin Stock, and Christopher B. Jones. Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 12:1-12:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{wijegunarathna_et_al:LIPIcs.GIScience.2025.12,
  author =	{Wijegunarathna, Kalana and Stock, Kristin and Jones, Christopher B.},
  title =	{{Large Multi-Modal Model Cartographic Map Comprehension for Textual Locality Georeferencing}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{12:1--12:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.12},
  URN =		{urn:nbn:de:0030-drops-238412},
  doi =		{10.4230/LIPIcs.GIScience.2025.12},
  annote =	{Keywords: Large Multi-Modal Models, Large Language Models, LLM, Georeferencing, Natural History collections}
}
Document
Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs

Authors: Katrina Schweikert, David K. Kedrowski, Shirly Stephen, and Torsten Hahmann

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Geospatial Knowledge Graphs (GeoKGs) represent a significant advancement in the integration of AI-driven geographic information, facilitating interoperable and semantically rich geospatial analytics across various domains. This paper explores the use of topologically enriched GeoKGs, built on an explicit representation of S2 Geometry alongside precomputed topological relations, for constructing efficient geospatial analysis workflows within and across knowledge graphs (KGs). Using the SAWGraph knowledge graph as a case study focused on environmental contamination by PFAS, we demonstrate how this framework supports fundamental GIS operations - such as spatial filtering, proximity analysis, overlay operations, and network analysis - in a GeoKG setting, while allowing for the easy linking of these operations with one another and with semantic filters. This enables the efficient execution of complex geospatial analyses as semantically explicit queries and enhances the usability of geospatial data across graphs. Additionally, the framework eliminates the need for explicit support for GeoSPARQL’s topological operations in the utilized graph databases and better integrates spatial knowledge into the overall semantic inference process supported by RDFS and OWL ontologies.
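
The effect of precomputing topological relations can be imitated with ordinary set operations: once each feature is linked to the grid cells it overlaps, spatial filters become joins over stored relations rather than geometry tests at query time. The toy relations below are invented (they are not the SAWGraph schema), and plain Python stands in for SPARQL.

# Toy illustration (not the SAWGraph schema): once every feature is linked to
# the S2-style grid cells it overlaps, spatial filtering and proximity queries
# become joins/intersections over precomputed relations rather than geometry
# computations at query time. Feature names and cell IDs are made up.
feature_cells = {                      # precomputed "overlaps cell" relation
    "well:17":     {"cell:a1"},
    "landfill:3":  {"cell:a1", "cell:a2"},
    "river:otter": {"cell:a2", "cell:b1"},
    "farm:north":  {"cell:b1"},
}
feature_type = {"well:17": "Well", "landfill:3": "Landfill",
                "river:otter": "River", "farm:north": "Farm"}

def colocated(feature, kind):
    # "Which features of type `kind` share at least one grid cell with `feature`?"
    cells = feature_cells[feature]
    return [f for f, fc in feature_cells.items()
            if f != feature and feature_type[f] == kind and cells & fc]

print(colocated("landfill:3", "Well"))    # ['well:17']
print(colocated("landfill:3", "River"))   # ['river:otter']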

Cite as

Katrina Schweikert, David K. Kedrowski, Shirly Stephen, and Torsten Hahmann. Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 4:1-4:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{schweikert_et_al:LIPIcs.GIScience.2025.4,
  author =	{Schweikert, Katrina and Kedrowski, David K. and Stephen, Shirly and Hahmann, Torsten},
  title =	{{Precomputed Topological Relations for Integrated Geospatial Analysis Across Knowledge Graphs}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{4:1--4:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.4},
  URN =		{urn:nbn:de:0030-drops-238332},
  doi =		{10.4230/LIPIcs.GIScience.2025.4},
  annote =	{Keywords: knowledge graph, GeoKG, spatial analysis, ontology, SPARQL, GeoSPARQL, discrete global grid system, S2 geometry, GeoAI, PFAS}
}
Document
From Prediction to Action: A Constraint-Based Approach to Predictive Policing

Authors: Younes Mechqrane and Ismail Elabbassi

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
Crime prevention in urban environments demands both accurate crime forecasting and the efficient deployment of limited law enforcement resources. In this paper, we present an integrated framework that combines a machine learning module (i.e. PredRNN++ [Wang et al., 2018]) for spatiotemporal crime prediction with a constraint programming module for patrol route optimization. Our approach operates within the ICON loop framework [Bessiere et al., 2017], facilitating iterative refinement of predictions and immediate adaptation of patrol strategies. We validate our method using the City of Chicago Crime Dataset. Experimental results show that routes informed by crime predictions significantly outperform strategies relying solely on historical patterns or operational constraints. These findings illustrate how coupling predictive analytics with constraint programming can substantially enhance resource allocation and overall crime deterrence.
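
The overall loop can be sketched in a few lines, with placeholders standing in for both components: a moving-average forecast instead of PredRNN++ and a greedy top-k allocation instead of the constraint-programming route optimizer. The grid size and crime counts below are synthetic.

# Simplified predict-then-optimize loop (illustrative only): the paper couples
# PredRNN++ forecasts with a constraint-programming routing model; here the
# forecast is a moving average and the patrol allocation is greedy, just to show
# how predictions feed the resource-allocation step in each iteration.
import numpy as np

def forecast(history):
    # Placeholder for PredRNN++: per-cell moving average of recent crime counts.
    return history[-3:].mean(axis=0)

def allocate_patrols(risk, k):
    # Placeholder for the CP route optimizer: pick the k highest-risk cells.
    flat = np.argsort(risk, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, risk.shape)) for i in flat]

rng = np.random.default_rng(0)
history = rng.poisson(2.0, size=(8, 5, 5)).astype(float)   # 8 past periods, 5x5 grid
for step in range(3):                                      # one loop iteration per period
    risk = forecast(history)
    patrols = allocate_patrols(risk, k=4)
    observed = rng.poisson(risk)                            # new observations arrive
    history = np.concatenate([history, observed[None]], axis=0)
    print(step, patrols)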

Cite as

Younes Mechqrane and Ismail Elabbassi. From Prediction to Action: A Constraint-Based Approach to Predictive Policing. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 29:1-29:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{mechqrane_et_al:LIPIcs.CP.2025.29,
  author =	{Mechqrane, Younes and Elabbassi, Ismail},
  title =	{{From Prediction to Action: A Constraint-Based Approach to Predictive Policing}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{29:1--29:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.29},
  URN =		{urn:nbn:de:0030-drops-238902},
  doi =		{10.4230/LIPIcs.CP.2025.29},
  annote =	{Keywords: Inductive Constraint Programming (ICON) Loop, Next Frame Prediction, PredRNN++}
}
Document
Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search

Authors: Valentin Antuori, Damien T. Wojtowicz, and Emmanuel Hebrard

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
The increasing hunger for remote sensing data fuels a boom in satellite imagery, leading to larger agile Earth observation satellite (AEOS) constellations. Therefore, instances of the AEOS scheduling problem (AEOSSP) have become harder to solve. As most existing approaches to the AEOSSP are designed with a single spacecraft or smaller constellations in mind, they are not tailored to the needs of our industrial partner, which is about to launch a constellation of 20 AEOSs. Hence, we designed a local search solver able to schedule observations and downloads at such a scale. It relies on solving a series of sub-problems as travelling salesman problems with time windows (TSPTW), first greedily, then using the CP-SAT exact solver to find a solution when the greedy insertion fails. Lastly, it schedules downloads and enforces memory constraints with greedy algorithms. Experiments were carried out on instances from the literature as well as on instances generated with a simulator we designed. Our experiments show that using CP to solve the sub-problems significantly improves the solutions, and that overall our method is slightly better than state-of-the-art approaches.
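
The CP component mentioned above solves each sub-problem as a TSPTW. A minimal OR-Tools CP-SAT model for a tiny, made-up TSPTW instance is sketched below; it omits every AEOS-specific aspect (observations, downloads, memory) and is not the authors' model.

# Minimal TSPTW model in OR-Tools CP-SAT, the solver the abstract names for the
# sub-problems; the 4-node instance, travel times, time windows, and objective
# are invented for illustration only.
from ortools.sat.python import cp_model

travel = [[0, 3, 4, 5],
          [3, 0, 2, 4],
          [4, 2, 0, 3],
          [5, 4, 3, 0]]
windows = [(0, 0), (2, 10), (4, 14), (8, 20)]   # (earliest, latest) visit times
n = len(travel)

model = cp_model.CpModel()
visit = [model.NewIntVar(lo, hi, f"t{i}") for i, (lo, hi) in enumerate(windows)]
arcs = []
for i in range(n):
    for j in range(n):
        if i != j:
            lit = model.NewBoolVar(f"arc_{i}_{j}")
            arcs.append((i, j, lit))
            if j != 0:   # returning to the depot does not constrain its start time
                model.Add(visit[j] >= visit[i] + travel[i][j]).OnlyEnforceIf(lit)
model.AddCircuit(arcs)                 # exactly one circuit over the chosen arcs
model.Minimize(visit[n - 1])           # e.g. visit the last node as early as possible

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(t) for t in visit])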

Cite as

Valentin Antuori, Damien T. Wojtowicz, and Emmanuel Hebrard. Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{antuori_et_al:LIPIcs.CP.2025.3,
  author =	{Antuori, Valentin and Wojtowicz, Damien T. and Hebrard, Emmanuel},
  title =	{{Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.3},
  URN =		{urn:nbn:de:0030-drops-238647},
  doi =		{10.4230/LIPIcs.CP.2025.3},
  annote =	{Keywords: Local Search, Greedy Algorithms, Aerospace Applications}
}
Document
Learning to Bound for Maximum Common Subgraph Algorithms

Authors: Buddhi W. Kothalawala, Henning Koehler, and Qing Wang

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
Identifying the maximum common subgraph between two graphs is a computationally challenging NP-hard problem. While the McSplit algorithm represents a state-of-the-art approach within a branch-and-bound (BnB) framework, several extensions have been proposed to enhance its vertex pair selection strategy, often utilizing reinforcement learning techniques. Nonetheless, the quality of the upper bound remains a critical factor in accelerating the search process by effectively pruning unpromising branches. This research introduces a novel, more restrictive upper bound derived from a detailed analysis of the McSplit algorithm’s generated partitions. To enhance the effectiveness of this bound, we propose a reinforcement learning approach that strategically directs computational effort towards the most promising regions within the search space.
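
For context, the classic McSplit upper bound that the paper tightens can be stated in one line: over the current label classes (pairs of still-matchable vertex sets from the two graphs), at most min(|L|, |R|) further pairs can come from each class. The function below computes that classic bound; the paper's more restrictive bound and the reinforcement-learning policy are not reproduced here.

# The classic McSplit upper bound, shown for context (the paper proposes a
# tighter one): given the current label classes -- pairs of vertex sets from the
# two graphs that are still mutually matchable -- at most min(|L|, |R|) further
# vertex pairs can be added from each class.
def mcsplit_upper_bound(current_mapping_size, label_classes):
    return current_mapping_size + sum(min(len(left), len(right))
                                      for left, right in label_classes)

# Example: 2 pairs already matched, remaining classes of sizes (3, 1) and (2, 4).
print(mcsplit_upper_bound(2, [({"a", "b", "c"}, {"x"}),
                              ({"d", "e"}, {"y", "z", "w", "v"})]))  # 2 + 1 + 2 = 5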

Cite as

Buddhi W. Kothalawala, Henning Koehler, and Qing Wang. Learning to Bound for Maximum Common Subgraph Algorithms. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 22:1-22:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{kothalawala_et_al:LIPIcs.CP.2025.22,
  author =	{Kothalawala, Buddhi W. and Koehler, Henning and Wang, Qing},
  title =	{{Learning to Bound for Maximum Common Subgraph Algorithms}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{22:1--22:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.22},
  URN =		{urn:nbn:de:0030-drops-238837},
  doi =		{10.4230/LIPIcs.CP.2025.22},
  annote =	{Keywords: Combinatorial Search, Branch and Bound, Graph Theory}
}
Document
Phasing Data from Genotype Queries via the μ-PBWT

Authors: Davide Cozzi, Paola Bonizzoni, Christina Boucher, Ben Langmead, and Yuri Pirola

Published in: OASIcs, Volume 131, The Expanding World of Compressed Data: A Festschrift for Giovanni Manzini's 60th Birthday (2025)


Abstract
Genotype phasing - the process of reconstructing haplotypes from genotype data - is a fundamental problem in genomics with applications in ancestry inference, imputation, and disease association. Traditional phasing methods rely on statistical models or combinatorial approaches, which can be computationally expensive, particularly when applied to large-scale reference panels. In this paper, we present a first exploration of using the μ-PBWT (a run-length encoded Positional Burrows-Wheeler Transform) to solve the genotype phasing problem with a reference panel. Leveraging our previous results on positional substrings, we propose an approach that can explain a query genotype if the corresponding haplotype pair exists in the input panel. Moreover, our method is extended to cases where such a pair does not exist, although some regions may remain unphased if they cannot be explicitly explained using the reference panel. We implemented this method and compared it against Beagle, a state-of-the-art phasing tool, demonstrating that, in the absence of mutations and recombinations, our approach correctly identifies the haplotype pair that explains a genotype query while using seven times less memory than Beagle. However, we also observe that as mutation rates increase, the quality of the phasing decreases as a result of the growing difficulty of identifying consistent haplotype pairs in the presence of sequence variation. These findings highlight the potential of μ-PBWT as an efficient alternative for genotype phasing, particularly in settings where computational resources are limited. The source code is publicly available at https://github.com/dlcgold/muPBWT/tree/phase.
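
The core query has a simple brute-force formulation: does some pair of panel haplotypes sum, site by site, to the 0/1/2 genotype? The sketch below spells out that naive check on a made-up panel; μ-PBWT answers the same question in run-length compressed space instead of scanning all pairs.

# Brute-force version of the query the paper answers in compressed space via
# mu-PBWT: find a pair of panel haplotypes whose per-site sum explains a
# genotype (0/1/2 per site). The tiny panel below is made up for illustration.
from itertools import combinations_with_replacement

def explain_genotype(panel, genotype):
    for i, j in combinations_with_replacement(range(len(panel)), 2):
        if all(panel[i][s] + panel[j][s] == genotype[s] for s in range(len(genotype))):
            return i, j          # haplotype pair explaining the query
    return None                  # no explaining pair: would need partial phasing

panel = [
    [0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 1, 0],
]
print(explain_genotype(panel, [1, 1, 0, 2, 1]))  # (0, 1)
print(explain_genotype(panel, [2, 0, 0, 0, 0]))  # None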

Cite as

Davide Cozzi, Paola Bonizzoni, Christina Boucher, Ben Langmead, and Yuri Pirola. Phasing Data from Genotype Queries via the μ-PBWT. In The Expanding World of Compressed Data: A Festschrift for Giovanni Manzini's 60th Birthday. Open Access Series in Informatics (OASIcs), Volume 131, pp. 10:1-10:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cozzi_et_al:OASIcs.Manzini.10,
  author =	{Cozzi, Davide and Bonizzoni, Paola and Boucher, Christina and Langmead, Ben and Pirola, Yuri},
  title =	{{Phasing Data from Genotype Queries via the \mu-PBWT}},
  booktitle =	{The Expanding World of Compressed Data: A Festschrift for Giovanni Manzini's 60th Birthday},
  pages =	{10:1--10:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-390-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{131},
  editor =	{Ferragina, Paolo and Gagie, Travis and Navarro, Gonzalo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Manzini.10},
  URN =		{urn:nbn:de:0030-drops-239183},
  doi =		{10.4230/OASIcs.Manzini.10},
  annote =	{Keywords: Positional Burrows-Wheeler Transform, r-index, minimal position substring cover, set-maximal exact matches, genotype phasing}
}
Document
Bridging Language Models and Symbolic Solvers via the Model Context Protocol

Authors: Stefan Szeider

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
This paper presents the MCP Solver, a system that bridges large language models with symbolic solvers through the Model Context Protocol (MCP). The system includes a server and a client component. The server provides an interface to constraint programming (via MiniZinc Python), propositional satisfiability and maximum satisfiability (both via PySAT), and SAT Modulo Theories (via Python Z3). The client contains an agent that connects to the server via MCP and uses a language model to autonomously translate problem statements (given in English) into encodings through an incremental editing process, and then runs the solver. Our experiments demonstrate that this neurosymbolic integration effectively combines the natural language understanding of language models with robust solving capabilities across multiple solving paradigms.
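
The solver backends the server exposes are ordinary library calls. For the propositional case, the interaction boils down to PySAT usage like the following; the clauses are an arbitrary toy formula, and the MCP Solver's own tool interface is not shown here.

# What the propositional backend boils down to: an ordinary PySAT call. The
# clauses encode an arbitrary toy statement ("x1 or x2; not both x1 and x3;
# x3 holds") and are not taken from the paper.
from pysat.solvers import Glucose3

with Glucose3() as solver:
    solver.add_clause([1, 2])      # x1 or x2
    solver.add_clause([-1, -3])    # not (x1 and x3)
    solver.add_clause([3])         # x3
    if solver.solve():
        print(solver.get_model())  # e.g. [-1, 2, 3]
    else:
        print("UNSAT")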

Cite as

Stefan Szeider. Bridging Language Models and Symbolic Solvers via the Model Context Protocol. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 30:1-30:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{szeider:LIPIcs.SAT.2025.30,
  author =	{Szeider, Stefan},
  title =	{{Bridging Language Models and Symbolic Solvers via the Model Context Protocol}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{30:1--30:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.30},
  URN =		{urn:nbn:de:0030-drops-237649},
  doi =		{10.4230/LIPIcs.SAT.2025.30},
  annote =	{Keywords: Large Language Models, Agents, Constraint Programming, Satisfiability Solvers, Maximum Satisfiability, SAT Modulo Theories, Model Context Protocol}
}
Document
Bit Packed Encodings for Grammar-Compressed Strings Supporting Fast Random Access

Authors: Alan M. Cleary, Joseph Winjum, Jordan Dood, Hiroki Shibata, and Shunsuke Inenaga

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Grammar-based compression is a powerful compression technique that allows for computation over the compressed data. While there has been extensive theoretical work on grammar and encoding size, there has been little work on practical grammar encodings. In this work, we consider the canonical array-of-arrays grammar representation and present a general bit packing approach for reducing its space requirements in practice. We then present three bit packing strategies based on this approach - one online and two offline - with different space-time trade-offs. This technique can be used to encode any grammar-compressed string while preserving the virtues of the array-of-arrays representation. We show that our encodings are N log₂ N away from the information-theoretic bound, where N is the number of symbols in the grammar, and that in practice they are much smaller than methods that meet the information-theoretic bound. Moreover, our experiments show that by using bit packed encodings we can achieve state-of-the-art performance in both grammar encoding size and the run-time of random-access queries.
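
The underlying idea of fixed-width bit packing with constant-time random access can be shown directly. The snippet below packs each symbol into a fixed number of bits of one big Python integer; it is a generic illustration, not one of the paper's three online/offline strategies.

# Generic fixed-width bit packing with O(1) random access: every symbol is
# stored in ceil(log2(alphabet_size)) bits of one big integer. This illustrates
# only the basic idea; the paper's online/offline strategies pack rule bodies
# adaptively for grammar-compressed strings.
from math import ceil, log2

def pack(symbols, alphabet_size):
    width = max(1, ceil(log2(alphabet_size)))
    packed = 0
    for i, s in enumerate(symbols):
        packed |= s << (i * width)        # little-endian placement of fixed-width codes
    return packed, width

def access(packed, width, i):
    return (packed >> (i * width)) & ((1 << width) - 1)

symbols = [5, 0, 7, 3, 3, 6, 1]
packed, width = pack(symbols, alphabet_size=8)
print(width, [access(packed, width, i) for i in range(len(symbols))])  # 3, original list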

Cite as

Alan M. Cleary, Joseph Winjum, Jordan Dood, Hiroki Shibata, and Shunsuke Inenaga. Bit Packed Encodings for Grammar-Compressed Strings Supporting Fast Random Access. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{cleary_et_al:LIPIcs.SEA.2025.12,
  author =	{Cleary, Alan M. and Winjum, Joseph and Dood, Jordan and Shibata, Hiroki and Inenaga, Shunsuke},
  title =	{{Bit Packed Encodings for Grammar-Compressed Strings Supporting Fast Random Access}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{12:1--12:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.12},
  URN =		{urn:nbn:de:0030-drops-232506},
  doi =		{10.4230/LIPIcs.SEA.2025.12},
  annote =	{Keywords: String algorithms, data compression, random access, grammar-based compression}
}
  • Refine by Type
  • 51 Document/PDF
  • 39 Document/HTML

  • Refine by Publication Year
  • 34 2025
  • 2 2024
  • 9 2023
  • 1 2022
  • 1 2020
  • Show More...

  • Refine by Author
  • 5 Li, Jing
  • 4 Agrawal, Kunal
  • 4 Baruah, Sanjoy
  • 3 Biswas, Russa
  • 3 Chen, Jing
  • Show More...

  • Refine by Series/Journal
  • 30 LIPIcs
  • 6 OASIcs
  • 1 DARTS
  • 3 LITES
  • 11 TGDK

  • Refine by Classification
  • 8 Computer systems organization → Real-time systems
  • 5 Computing methodologies → Knowledge representation and reasoning
  • 4 Computing methodologies → Artificial intelligence
  • 3 Computer systems organization → Embedded and cyber-physical systems
  • 3 Computer systems organization → Embedded software
  • Show More...

  • Refine by Keyword
  • 4 Large Language Models
  • 3 Knowledge graphs
  • 2 Classification
  • 2 Deep Learning
  • 2 Differential Privacy
  • Show More...
