36 Search Results for "Lin, Wei-Kai"


Document
Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352)

Authors: Christine Bauer, Li Chen, Nicola Ferro, Norbert Fuhr, Avishek Anand, Timo Breuer, Guglielmo Faggioli, Ophir Frieder, Hideo Joho, Jussi Karlgren, Johannes Kiesel, Bart P. Knijnenburg, Aldo Lipani, Lien Michiels, Andrea Papenmeier, Maria Soledad Pera, Mark Sanderson, Scott Sanner, Benno Stein, Johanne R. Trippas, Karin Verspoor, and Martijn C. Willemsen

Published in: Dagstuhl Manifestos, Volume 11, Issue 1 (2025)


Abstract
During the workshop, we discussed in depth what CONversational Information ACcess (CONIAC) is and what its unique features are, proposed a world model abstracting it, and defined the Conversational Agents Framework for Evaluation (CAFE) for the evaluation of CONIAC systems, consisting of six major components: 1) goals of the system’s stakeholders, 2) user tasks to be studied in the evaluation, 3) aspects of the users carrying out the tasks, 4) evaluation criteria to be considered, 5) evaluation methodology to be applied, and 6) measures for the quantitative criteria chosen.

Cite as

Christine Bauer, Li Chen, Nicola Ferro, Norbert Fuhr, Avishek Anand, Timo Breuer, Guglielmo Faggioli, Ophir Frieder, Hideo Joho, Jussi Karlgren, Johannes Kiesel, Bart P. Knijnenburg, Aldo Lipani, Lien Michiels, Andrea Papenmeier, Maria Soledad Pera, Mark Sanderson, Scott Sanner, Benno Stein, Johanne R. Trippas, Karin Verspoor, and Martijn C. Willemsen. Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352). In Dagstuhl Manifestos, Volume 11, Issue 1, pp. 19-67, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{bauer_et_al:DagMan.11.1.19,
  author =	{Bauer, Christine and Chen, Li and Ferro, Nicola and Fuhr, Norbert and Anand, Avishek and Breuer, Timo and Faggioli, Guglielmo and Frieder, Ophir and Joho, Hideo and Karlgren, Jussi and Kiesel, Johannes and Knijnenburg, Bart P. and Lipani, Aldo and Michiels, Lien and Papenmeier, Andrea and Pera, Maria Soledad and Sanderson, Mark and Sanner, Scott and Stein, Benno and Trippas, Johanne R. and Verspoor, Karin and Willemsen, Martijn C.},
  title =	{{Conversational Agents: A Framework for Evaluation (CAFE) (Dagstuhl Perspectives Workshop 24352)}},
  pages =	{19--67},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2025},
  volume =	{11},
  number =	{1},
  editor =	{Bauer, Christine and Chen, Li and Ferro, Nicola and Fuhr, Norbert and Anand, Avishek and Breuer, Timo and Faggioli, Guglielmo and Frieder, Ophir and Joho, Hideo and Karlgren, Jussi and Kiesel, Johannes and Knijnenburg, Bart P. and Lipani, Aldo and Michiels, Lien and Papenmeier, Andrea and Pera, Maria Soledad and Sanderson, Mark and Sanner, Scott and Stein, Benno and Trippas, Johanne R. and Verspoor, Karin and Willemsen, Martijn C.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.11.1.19},
  URN =		{urn:nbn:de:0030-drops-252722},
  doi =		{10.4230/DagMan.11.1.19},
  annote =	{Keywords: Conversational Agents, Evaluation, Information Access}
}
Document
Research
GraphRAG on Technical Documents - Impact of Knowledge Graph Schema

Authors: Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke

Published in: TGDK, Volume 3, Issue 2 (2025). Transactions on Graph Data and Knowledge


Abstract
Retrieval Augmented Generation (RAG) is seeing rapid adoption in industry to enable employees to query information captured in their organisation's proprietary data. In this work, we test the impact of domain-relevant knowledge graph schemas on the results of Microsoft’s GraphRAG pipeline. Our approach aims to address the poor quality of GraphRAG responses on technical reports rich in domain-specific terms. The use case involves technical reports about geology, chemistry and mineral processing published by the Minerals Research Institute of Western Australia (MRIWA). Four schemas are considered: a simple five-class minerals-domain schema developed by domain experts, an expanded minerals-domain schema, the schema auto-generated by Microsoft GraphRAG, and a schema-less GraphRAG. These are compared to a conventional baseline RAG. Performance is evaluated using a scoring approach that accounts for the mix of correct, incorrect, additional, and missing content in RAG responses. The results show that the simple five-class minerals-domain schema extracts approximately 10% more entities from the MRIWA reports than the other schema options. Additionally, both the five-class and the expanded eight-class minerals-domain schemas produce the most factually correct answers and the fewest hallucinations. We attribute this to the minerals-specific schemas extracting more relevant, domain-specific information during the Indexing stage. As a result, the Query stage’s context window includes more high-value content, which contributes to the observed improvement in answer quality compared to the other pipelines. In contrast, pipelines with fewer domain-related entities in the KG retrieve less valuable information, leaving more room for irrelevant content in the context window. Baseline RAG responses were typically shorter, less complete, and contained more hallucinations than our GraphRAG pipelines. We provide a complete set of resources at https://github.com/nlp-tlp/GraphRAG-on-Minerals-Domain/tree/main. These resources include links to the MRIWA reports, a set of questions (from simple to challenging) along with domain-expert curated answers, schemas, and evaluations of the pipelines.
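
As an illustration of the kind of scoring the abstract describes, the sketch below combines judged counts of correct, incorrect, additional, and missing statements into a single normalised score. It is a minimal stand-in, not the paper's actual metric; the weights and the upstream judging step are assumptions.

def score_response(n_correct, n_incorrect, n_additional, n_missing):
    # Hypothetical weights: reward correct content, penalise wrong and
    # missing content, lightly penalise extra-but-harmless content.
    w_correct, w_incorrect, w_additional, w_missing = 1.0, -1.0, -0.25, -0.5
    total = n_correct + n_incorrect + n_additional + n_missing
    if total == 0:
        return 0.0
    raw = (w_correct * n_correct + w_incorrect * n_incorrect
           + w_additional * n_additional + w_missing * n_missing)
    return raw / total

# e.g. a mostly-correct answer with one wrong and two extra statements:
print(score_response(n_correct=8, n_incorrect=1, n_additional=2, n_missing=1))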

Cite as

Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke. GraphRAG on Technical Documents - Impact of Knowledge Graph Schema. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{scaffidi_et_al:TGDK.3.2.3,
  author =	{Scaffidi, Henri and Hodkiewicz, Melinda and Woods, Caitlin and Roocke, Nicole},
  title =	{{GraphRAG on Technical Documents - Impact of Knowledge Graph Schema}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.3},
  URN =		{urn:nbn:de:0030-drops-248131},
  doi =		{10.4230/TGDK.3.2.3},
  annote =	{Keywords: RAG, minerals, local search, global search, entity extraction, competency questions}
}
Document
Integrating Human-In-The-Loop AI to Tackle Space Communication Delay Challenges

Authors: Nikos Mavrakis, Effie Lai-Chong Law, and Hubert P. H. Shum

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
Deep space missions face significant communication delays that disrupt both operational workflows and psychological support for crew members. Unlike in low Earth orbit operations, delays ranging from several minutes to nearly an hour make real-time communication with mission control infeasible, forcing crews to act with greater independence under uncertain conditions. This position paper examines how human-in-the-loop AI, digital twins, and edge AI can be integrated to mitigate these delays while maintaining astronaut autonomy and engagement. We argue that human-in-the-loop AI enables decision-making processes that are responsive to local context while remaining adaptable to changing mission demands. Digital twins offer real-time simulation and predictive modelling capabilities, allowing astronauts to explore options and troubleshoot without waiting for ground input. Edge AI brings computation closer to data sources, enabling low-latency inference onboard spacecraft for time-critical decisions. These ideas are explored through two use cases: using deepfakes to support emotionally resonant communication with loved ones, and applying visual-language models for onboard fault diagnosis and adaptive task replanning. We conclude with reflections on system design challenges under constrained and high-stakes conditions.

Cite as

Nikos Mavrakis, Effie Lai-Chong Law, and Hubert P. H. Shum. Integrating Human-In-The-Loop AI to Tackle Space Communication Delay Challenges. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 15:1-15:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{mavrakis_et_al:OASIcs.SpaceCHI.2025.15,
  author =	{Mavrakis, Nikos and Law, Effie Lai-Chong and Shum, Hubert P. H.},
  title =	{{Integrating Human-In-The-Loop AI to Tackle Space Communication Delay Challenges}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{15:1--15:16},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.15},
  URN =		{urn:nbn:de:0030-drops-240051},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.15},
  annote =	{Keywords: Human-in-the-loop AI, communication delays, human spaceflight}
}
Document
Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support

Authors: Kaisheng Li and Richard S. Whittle

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
We propose a unified framework for an Earth‑independent AI system that provides explainable, context‑aware decision support for EVA mission planning by integrating six core components: a fine‑tuned EVA domain LLM, a retrieval‑augmented knowledge base, a short-term memory store, physical simulation models, an agentic orchestration layer, and a multimodal user interface. To ground our design, we analyze the current roles and substitution potential of the Mission Control Center - identifying which procedural and analytical functions can be automated onboard while preserving human oversight for experiential and strategic tasks. Building on this framework, we introduce RASAGE (Retrieval & Simulation Augmented Guidance Agent for Exploration), a proof‑of‑concept toolset that combines Microsoft Phi‑4‑mini‑instruct with a FAISS (Facebook AI Similarity Search)‑powered EVA knowledge base and custom A* path planning and hypogravity metabolic models to generate grounded, traceable EVA plans. We outline a staged validation strategy to evaluate improvements in route efficiency, metabolic prediction accuracy, anomaly response effectiveness, and crew trust under realistic communication delays. Our findings demonstrate the feasibility of replicating key Mission Control functions onboard, enhancing crew autonomy, reducing cognitive load, and improving safety for deep‑space exploration missions.
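
A minimal sketch of the retrieval half of such a toolset, using the FAISS library named in the abstract. The embedding dimension and the corpus are placeholders (random vectors so the snippet runs stand-alone), and the A* planner and hypogravity metabolic models are omitted.

import numpy as np
import faiss  # Facebook AI Similarity Search, as cited above

dim = 384                                                # assumed embedding size
doc_vecs = np.random.rand(1000, dim).astype("float32")   # stand-in for embedded EVA documents
index = faiss.IndexFlatL2(dim)                           # exact L2 index over the knowledge base
index.add(doc_vecs)

query_vec = np.random.rand(1, dim).astype("float32")     # stand-in for an embedded crew question
distances, ids = index.search(query_vec, 5)              # top-5 passages handed to the agent
print(ids[0])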

Cite as

Kaisheng Li and Richard S. Whittle. Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 6:1-6:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{li_et_al:OASIcs.SpaceCHI.2025.6,
  author =	{Li, Kaisheng and Whittle, Richard S.},
  title =	{{Toward an Earth-Independent System for EVA Mission Planning: Integrating Physical Models, Domain Knowledge, and Agentic RAG to Provide Explainable LLM-Based Decision Support}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{6:1--6:17},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.6},
  URN =		{urn:nbn:de:0030-drops-239967},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.6},
  annote =	{Keywords: Human-AI Interaction for Space Exploration, Extravehicular Activities, Cognitive load and Human Performance Issues, Human Systems Exploration, Lunar Exploration, LLM}
}
Document
MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication

Authors: Brett Hemenway Falk, Daniel Noble, and Rafail Ostrovsky

Published in: LIPIcs, Volume 343, 6th Conference on Information-Theoretic Cryptography (ITC 2025)


Abstract
A Distributed Oblivious RAM is a multi-party protocol that securely implements a RAM functionality on secret-shared inputs and outputs. This paper presents two information-theoretically secure DORAMs whose communication costs are asymptotic improvements over the state of the art. Let n be the number of memory locations and let d be the bit-length of each location. The first, MetaDORAM1, is statistically secure, with n^{-ω(1)} leakage. It has amortized O(log_b(n) d + b ω(1) log(n) + log³(n)/log(log(n))) bits of communication per memory access. Here, b ≥ 2 is a free parameter and ω(1) is any super-constant function (in n). The most communication-efficient prior statistically secure DORAM was that of Abraham et al. (PKC 2017), which has cost O(log_b(n) d + b ω(1) log_b(n) log²(n)). MetaDORAM1 is a Θ(ω(1) log(log(n)))-factor improvement over the work of Abraham et al. whenever d = O(log²(n)). The second protocol, MetaDORAM2, achieves perfect security. It has amortized communication cost O(log_b(n)d + b log(n) + log³(n)/log(log(n))) where, again, b ≥ 2 is a free parameter. The best prior perfectly secure DORAM is that of Chan et al. (ASIACRYPT 2018), which has communication cost O(log(n) d + log³(n)). MetaDORAM2 is therefore an Ω(log(log(n)))-factor improvement over the DORAM of Chan et al. under any parameter range (by setting b = log(n)) and is a Θ(log(n))-factor improvement for d = Ω(n^ε) for any constant ε > 0 (by setting b = d/log(n)). Our work is the first perfectly secure DORAM with sub-logarithmic communication overhead. MetaDORAM2 comes at the cost of a once-off (for any given n) setup phase which requires exponential (in n) computation. Both DORAMs are in the 3-party setting with security against 1 semi-honest, static corruption. By a trivial transformation, these can be converted, respectively, into statistically and perfectly secure active 3-server ORAM protocols secure against 1 corrupt server, with the same communication costs. These multi-server ORAM protocols are likewise asymptotic improvements over the state of the art.
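
To make the quoted cost expressions concrete, the sketch below evaluates them for sample parameters, including the b = log(n) setting discussed above. All hidden constants are dropped, so only ratios between the two values are meaningful; this illustrates the formulas, not the protocols.

import math

def metadoram2_cost(n, d, b):
    # O(log_b(n)*d + b*log(n) + log^3(n)/log(log(n))), constants dropped
    lg = math.log2(n)
    return math.log(n, b) * d + b * lg + lg ** 3 / math.log2(lg)

def chan_et_al_cost(n, d):
    # O(log(n)*d + log^3(n)), constants dropped
    lg = math.log2(n)
    return lg * d + lg ** 3

n, d = 2 ** 20, 64
b = int(math.log2(n))  # the b = log(n) setting from the abstract
print(metadoram2_cost(n, d, b), chan_et_al_cost(n, d))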

Cite as

Brett Hemenway Falk, Daniel Noble, and Rafail Ostrovsky. MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication. In 6th Conference on Information-Theoretic Cryptography (ITC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 343, pp. 6:1-6:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{falk_et_al:LIPIcs.ITC.2025.6,
  author =	{Falk, Brett Hemenway and Noble, Daniel and Ostrovsky, Rafail},
  title =	{{MetaDORAM: Info-Theoretic Distributed ORAM with Less Communication}},
  booktitle =	{6th Conference on Information-Theoretic Cryptography (ITC 2025)},
  pages =	{6:1--6:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-385-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{343},
  editor =	{Gilboa, Niv},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2025.6},
  URN =		{urn:nbn:de:0030-drops-243560},
  doi =		{10.4230/LIPIcs.ITC.2025.6},
  annote =	{Keywords: ORAM, MPC, DORAM, multi-server ORAM, active ORAM}
}
Document
DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs

Authors: Ali Ghaffaari, Alexander Schönhuth, and Tobias Marschall

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
Determining the distance between two loci within a genomic region is a recurrent operation in various tasks in computational genomics. A notable example of this task arises in paired-end read mapping as a form of validation of distances between multiple alignments. While straightforward for a single genome, graph-based reference structures render the operation considerably more involved. Given the sheer number of such queries in a typical read mapping experiment, an efficient algorithm for answering distance queries is crucial. In this paper, we introduce DiVerG, a compact data structure as well as a fast and scalable algorithm, for constructing distance indexes for general sequence graphs on multi-core CPU and many-core GPU architectures. DiVerG is based on PairG [Jain et al., 2019], but overcomes the limitations of PairG by exploiting the extensive potential for improvement in terms of scalability and space efficiency. As a consequence, DiVerG can process substantially larger datasets, such as whole human genomes, which are unmanageable by PairG. DiVerG offers faster index construction time and consistently faster query time, with gains proportional to the size of the underlying compact data structure. We demonstrate that our method performs favorably on multiple real datasets at various scales. DiVerG achieves superior performance over PairG; e.g., a 2.5-4x speed-up in query time, a 44-340x smaller index size, and a 3-50x faster construction time for the genome graph of the MHC region, a particularly variable region of the human genome. The implementation is available at: https://github.com/cartoonist/diverg
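
The validation query at the heart of PairG-style distance indexes (is there a path of length within [d_lo, d_hi] between u and v?) can be phrased with boolean adjacency-matrix powers, as in the toy sketch below. DiVerG's actual contribution, the compact sparse representation and scalable CPU/GPU construction, is not reproduced here.

import numpy as np

# Toy unweighted graph; real sequence graphs are huge and sparse, and
# DiVerG stores this reachability information in a compact sparse form.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

def distance_index(A, d_lo, d_hi):
    """Boolean matrix whose (u, v) entry is True iff some path of
    length in [d_lo, d_hi] connects u to v."""
    n = A.shape[0]
    reach = np.zeros((n, n), dtype=bool)
    power = np.eye(n, dtype=bool)                        # A^0
    for k in range(d_hi + 1):
        if k >= d_lo:
            reach |= power
        power = (power.astype(int) @ A.astype(int)) > 0  # A^(k+1)
    return reach

idx = distance_index(A, 1, 2)
print(idx[0, 3])  # True: 0 -> 1 -> 3 has length 2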

Cite as

Ali Ghaffaari, Alexander Schönhuth, and Tobias Marschall. DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 10:1-10:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{ghaffaari_et_al:LIPIcs.WABI.2025.10,
  author =	{Ghaffaari, Ali and Sch\"{o}nhuth, Alexander and Marschall, Tobias},
  title =	{{DiVerG: Scalable Distance Index for Validation of Paired-End Alignments in Sequence Graphs}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{10:1--10:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.10},
  URN =		{urn:nbn:de:0030-drops-239369},
  doi =		{10.4230/LIPIcs.WABI.2025.10},
  annote =	{Keywords: Sequence graph, distance index, read mapping, sparse matrix}
}
Document
Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search

Authors: Valentin Antuori, Damien T. Wojtowicz, and Emmanuel Hebrard

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
The increasing hunger for remote sensing data fuels a boom in satellite imagery, leading to larger agile Earth observation satellite (AEOS) constellations. Therefore, instances of the AEOS scheduling problem (AEOSSP) have become harder to solve. As most existing approaches to the AEOSSP are designed with a single spacecraft or smaller constellations in mind, they are not tailored to the needs of our industrial partner, which is about to launch a constellation of 20 AEOSs. Hence, we designed a local search solver able to schedule observations and downloads at such a scale. It relies on solving a series of sub-problems cast as travelling salesman problems with time windows (TSPTW), first greedily, then using an exact CP-SAT solver to find a solution when greedy insertion fails. Lastly, it schedules downloads and enforces memory constraints with greedy algorithms. Experiments were carried out on instances from the literature as well as instances generated by a simulator we designed. Our experiments show that using CP to solve the sub-problems significantly improves the solutions, and that overall our method is slightly better than state-of-the-art approaches.
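
A minimal CP-SAT model for the TSPTW sub-problem, using OR-Tools' AddCircuit constraint. The three-node instance is invented, and the real solver layers greedy insertion, download scheduling, and memory constraints on top; this only sketches the exact sub-problem step.

from ortools.sat.python import cp_model

travel = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]     # invented transition times
windows = [(0, 100), (5, 20), (8, 30)]         # (earliest, latest) start per node
n = len(travel)

model = cp_model.CpModel()
t = [model.NewIntVar(lo, hi, f"t{i}") for i, (lo, hi) in enumerate(windows)]
arcs = {}
for i in range(n):
    for j in range(n):
        if i != j:
            arcs[i, j] = model.NewBoolVar(f"x{i}_{j}")
model.AddCircuit([(i, j, lit) for (i, j), lit in arcs.items()])
for (i, j), lit in arcs.items():
    if j != 0:  # node 0 closes the tour, so no time propagation back to it
        model.Add(t[j] >= t[i] + travel[i][j]).OnlyEnforceIf(lit)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([solver.Value(v) for v in t])    # feasible visit times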

Cite as

Valentin Antuori, Damien T. Wojtowicz, and Emmanuel Hebrard. Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{antuori_et_al:LIPIcs.CP.2025.3,
  author =	{Antuori, Valentin and Wojtowicz, Damien T. and Hebrard, Emmanuel},
  title =	{{Solving the Agile Earth Observation Satellite Scheduling Problem with CP and Local Search}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.3},
  URN =		{urn:nbn:de:0030-drops-238647},
  doi =		{10.4230/LIPIcs.CP.2025.3},
  annote =	{Keywords: Local Search, Greedy Algorithms, Aerospace Applications}
}
Document
Reducing Quantum Circuit Synthesis to #SAT

Authors: Dekel Zak, Jingyi Mei, Jean-Marie Lagniez, and Alfons Laarman

Published in: LIPIcs, Volume 340, 31st International Conference on Principles and Practice of Constraint Programming (CP 2025)


Abstract
Quantum circuit synthesis is the task of decomposing a given quantum operator into a sequence of elementary quantum gates. Since a finite target gate set cannot implement every operator exactly, approximation is often necessary. Model counting, or #SAT, has recently been demonstrated as a promising new approach for tackling core problems in quantum circuit analysis. In this work, we show for the first time that the universal quantum circuit synthesis problem can be reduced to maximum model counting. We formulate a #SAT encoding for exact and approximate depth-optimal quantum circuit synthesis into the Clifford+T gate set. We evaluate our method with an open-source implementation that uses the maximum model counter d4Max as a backend. For this purpose, we extended d4Max with support for complex and negative weights to represent amplitudes. Experimental results show that existing classical tools have potential for the quantum circuit synthesis problem.
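
The sketch below brute-forces a weighted model count over a toy CNF with complex literal weights, making concrete the amplitude-as-weight notion mentioned above. The clauses and weights are invented, and d4Max of course works by compilation rather than enumeration.

from itertools import product

clauses = [(1, -2), (2, 3)]                # toy CNF over variables 1..3
weight = {1: 0.5 + 0.5j, -1: 0.5 - 0.5j,   # invented complex literal weights
          2: 1.0, -2: -0.5,
          3: 1.0, -3: 1.0}

def satisfies(assign, clauses):
    return all(any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in clauses)

total = 0.0
for bits in product([False, True], repeat=3):
    assign = {i + 1: b for i, b in enumerate(bits)}
    if satisfies(assign, clauses):
        w = 1.0
        for v, b in assign.items():
            w *= weight[v if b else -v]    # product of literal weights
        total += w
print(total)  # the complex-weighted model count of the toy formula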

Cite as

Dekel Zak, Jingyi Mei, Jean-Marie Lagniez, and Alfons Laarman. Reducing Quantum Circuit Synthesis to #SAT. In 31st International Conference on Principles and Practice of Constraint Programming (CP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 340, pp. 38:1-38:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{zak_et_al:LIPIcs.CP.2025.38,
  author =	{Zak, Dekel and Mei, Jingyi and Lagniez, Jean-Marie and Laarman, Alfons},
  title =	{{Reducing Quantum Circuit Synthesis to #SAT}},
  booktitle =	{31st International Conference on Principles and Practice of Constraint Programming (CP 2025)},
  pages =	{38:1--38:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-380-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{340},
  editor =	{de la Banda, Maria Garcia},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CP.2025.38},
  URN =		{urn:nbn:de:0030-drops-238997},
  doi =		{10.4230/LIPIcs.CP.2025.38},
  annote =	{Keywords: Maximum weighted model counting, quantum circuit synthesis}
}
Document
Enumerating All Boolean Matches

Authors: Alexander Nadel and Yogev Shalmon

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
Boolean matching, a fundamental problem in circuit design, determines whether two Boolean circuits are equivalent under input/output permutations and negations. While most works focus on finding a single match or proving its absence, the problem of enumerating all matches remains largely unexplored, with BooM being a notable exception. Motivated by timing challenges in Intel’s library mapping flow, we introduce EBat - an open-source tool for enumerating all matches between single-output circuits. Built from scratch, EBat reuses BooM’s SAT encoding and introduces novel high-level algorithms and performance-critical subroutines to efficiently identify and block multiple mismatches and matches simultaneously. Experiments demonstrate that EBat substantially outperforms BooM’s baseline algorithm, solving 3 to 4 times more benchmarks within a given time limit. EBat has been productized as part of Intel’s library mapping flow, effectively addressing the timing challenges.
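
For intuition about the search space, the sketch below enumerates all matches between two small single-output functions given as truth tables, under input permutations and input/output negations. EBat answers the same question with SAT-based algorithms at circuit scale; this pins down only the problem statement.

from itertools import permutations, product

def evaluate(table, bits):
    return table[sum(b << i for i, b in enumerate(bits))]

def all_matches(f, g, n):
    # Yield every (permutation, input negations, output negation) under
    # which g, applied to the transformed inputs, agrees with f everywhere.
    for perm in permutations(range(n)):
        for in_neg in product([0, 1], repeat=n):
            for out_neg in (0, 1):
                if all(evaluate(f, bits) ^ out_neg ==
                       evaluate(g, tuple(bits[perm[i]] ^ in_neg[i]
                                         for i in range(n)))
                       for bits in product([0, 1], repeat=n)):
                    yield perm, in_neg, out_neg

f = [0, 0, 0, 1, 0, 0, 0, 1]  # x0 AND x1 (x2 ignored), 3-input truth table
g = [0, 0, 0, 0, 0, 1, 0, 1]  # x0 AND x2 (x1 ignored)
print(len(list(all_matches(f, g, 3))))  # 4 matches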

Cite as

Alexander Nadel and Yogev Shalmon. Enumerating All Boolean Matches. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 22:1-22:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{nadel_et_al:LIPIcs.SAT.2025.22,
  author =	{Nadel, Alexander and Shalmon, Yogev},
  title =	{{Enumerating All Boolean Matches}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{22:1--22:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.22},
  URN =		{urn:nbn:de:0030-drops-237568},
  doi =		{10.4230/LIPIcs.SAT.2025.22},
  annote =	{Keywords: Boolean Matching, All-Boolean-Matching, Enumeration, SAT, Generalization}
}
Document
Efficient Certified Reasoning for Binarized Neural Networks

Authors: Jiong Yang, Yong Kiam Tan, Mate Soos, Magnus O. Myreen, and Kuldeep S. Meel

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
Neural networks have emerged as essential components in safety-critical applications - these use cases demand complex, yet trustworthy computations. Binarized Neural Networks (BNNs) are a type of neural network where each neuron is constrained to a Boolean value; they are particularly well-suited for safety-critical tasks because they retain much of the computational capacities of full-scale (floating-point or quantized) deep neural networks, but remain compatible with satisfiability solvers for qualitative verification and with model counters for quantitative reasoning. However, existing methods for BNN analysis suffer from either limited scalability or susceptibility to soundness errors, which hinders their applicability in real-world scenarios. In this work, we present a scalable and trustworthy approach for both qualitative and quantitative verification of BNNs. Our approach introduces a native representation of BNN constraints in a custom-designed solver for qualitative reasoning, and in an approximate model counter for quantitative reasoning. We further develop specialized proof generation and checking pipelines with native support for BNN constraint reasoning, ensuring trustworthiness for all of our verification results. Empirical evaluations on a BNN robustness verification benchmark suite demonstrate that our certified solving approach achieves a 9× speedup over prior certified CNF and PB-based approaches, and our certified counting approach achieves a 218× speedup over the existing CNF-based baseline. In terms of coverage, our pipeline produces fully certified results for 99% and 86% of the qualitative and quantitative reasoning queries on BNNs, respectively. This is in sharp contrast to the best existing baselines which can fully certify only 62% and 4% of the queries, respectively.
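
A toy binarized layer and a brute-force radius-1 robustness check, just to fix the semantics the verifier reasons about: weights and activations constrained to {-1, +1}. The weights here are random, and real pipelines stack many such layers with batch normalisation.

import numpy as np

rng = np.random.default_rng(0)
W = rng.choice([-1, 1], size=(4, 8))   # 8 inputs -> 4 binarized neurons
b = rng.integers(-2, 3, size=4)

def bnn_layer(x):
    # sign(Wx + b), with sign(0) mapped to +1 (a common BNN convention)
    return np.where(W @ x + b >= 0, 1, -1)

def robust_at_radius_one(x):
    # Qualitative query: does flipping any single input change the output?
    y = bnn_layer(x)
    for i in range(len(x)):
        x_flipped = x.copy()
        x_flipped[i] *= -1
        if not np.array_equal(bnn_layer(x_flipped), y):
            return False
    return True

x = rng.choice([-1, 1], size=8)
print(bnn_layer(x), robust_at_radius_one(x))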

Cite as

Jiong Yang, Yong Kiam Tan, Mate Soos, Magnus O. Myreen, and Kuldeep S. Meel. Efficient Certified Reasoning for Binarized Neural Networks. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 32:1-32:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{yang_et_al:LIPIcs.SAT.2025.32,
  author =	{Yang, Jiong and Tan, Yong Kiam and Soos, Mate and Myreen, Magnus O. and Meel, Kuldeep S.},
  title =	{{Efficient Certified Reasoning for Binarized Neural Networks}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{32:1--32:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.32},
  URN =		{urn:nbn:de:0030-drops-237665},
  doi =		{10.4230/LIPIcs.SAT.2025.32},
  annote =	{Keywords: Neural network verification, proof certification, SAT solving, approximate model counting}
}
Document
Continuous Map Matching to Paths Under Travel Time Constraints

Authors: Yannick Bosch and Sabine Storandt

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
In this paper, we study the problem of map matching with travel time constraints. Given a sequence of k spatio-temporal measurements and an embedded path graph with travel time costs, the goal is to snap each measurement to a close-by location in the graph, such that consecutive locations can be reached from one another along the path within the timestamp difference of the respective measurements. This problem arises in public transit data processing as well as in map matching of movement trajectories to general graphs. We show that the classical approach for this problem, which relies on selecting a finite set of candidate locations in the graph for each measurement, cannot guarantee to find a consistent solution. We propose a new algorithm that can deal with an infinite set of candidate locations per measurement. We prove that our algorithm always detects a consistent map matching path (if one exists). Despite the enlarged candidate set, we also demonstrate that our algorithm has superior running time in theory and practice. For a path graph with n nodes, we show that our algorithm runs in 𝒪(k² n log(nk)) and under mild assumptions in 𝒪(k n^λ + n log³ n) for λ ≈ 0.695. This is a significant improvement over the baseline, which runs in 𝒪(k n²) and which might not even identify a correct solution. The performance of our algorithm hinges on an efficient segment-circle intersection data structure. We describe how to design and implement such a data structure for our application. In the experimental evaluation, we demonstrate the usefulness of our novel algorithm on a diverse set of generated measurements as well as GTFS data.
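
The primitive underneath that data structure is the segment-circle test itself: given a segment p + t(q - p) with t in [0, 1] and a circle with centre c and radius r, find the parameter range where the segment lies inside the circle. The plain quadratic is sketched below; organising many such tests efficiently is the paper's actual contribution and is not reproduced here.

import math

def segment_circle(p, q, c, r):
    """Return the sub-interval of t in [0, 1] where p + t*(q - p) lies
    inside the circle (c, r), or None if they do not intersect."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    fx, fy = p[0] - c[0], p[1] - c[1]
    a = dx * dx + dy * dy                 # assumes p != q
    b = 2 * (fx * dx + fy * dy)
    cc = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * cc
    if disc < 0:
        return None
    s = math.sqrt(disc)
    lo = max((-b - s) / (2 * a), 0.0)
    hi = min((-b + s) / (2 * a), 1.0)
    return (lo, hi) if lo <= hi else None

print(segment_circle((0, 0), (10, 0), (5, 0), 2))  # (0.3, 0.7)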

Cite as

Yannick Bosch and Sabine Storandt. Continuous Map Matching to Paths Under Travel Time Constraints. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 7:1-7:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bosch_et_al:LIPIcs.SEA.2025.7,
  author =	{Bosch, Yannick and Storandt, Sabine},
  title =	{{Continuous Map Matching to Paths Under Travel Time Constraints}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{7:1--7:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.7},
  URN =		{urn:nbn:de:0030-drops-232457},
  doi =		{10.4230/LIPIcs.SEA.2025.7},
  annote =	{Keywords: Map matching, Travel time, Segment-circle intersection data structure}
}
Document
Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems

Authors: Binqi Sun, Zhihang Wei, Andrea Bastoni, Debayan Roy, Mirco Theile, Tomasz Kloda, Rodolfo Pellizzoni, and Marco Caccamo

Published in: LIPIcs, Volume 335, 37th Euromicro Conference on Real-Time Systems (ECRTS 2025)


Abstract
Memory bandwidth regulation and cache partitioning are widely used techniques for achieving predictable timing in real-time computing systems. Combined with partitioned scheduling, these methods require careful co-allocation of tasks and resources to cores, as task execution times strongly depend on available allocated resources. To address this challenge, this paper presents a 0-1 linear program for task-resource co-allocation, along with a multi-objective heuristic designed to minimize resource usage while guaranteeing schedulability under a preemptive EDF scheduling policy. Our heuristic employs a multi-layer framework, where an outer layer explores resource allocations using Pareto-pruned search, and an inner layer optimizes task allocation by solving a knapsack problem using dynamic programming. To evaluate the performance of the proposed optimization algorithm, we profile real-world benchmarks on an embedded AMD UltraScale+ ZCU102 platform, with fine-grained resource partitioning enabled by the Jailhouse hypervisor, leveraging cache set partitioning and MemGuard for memory bandwidth regulation. Experiments based on the benchmarking results show that the proposed 0-1 linear program outperforms existing mixed-integer programs by finding more optimal solutions within the same time limit. Moreover, the proposed multi-objective multi-layer heuristic performs consistently better than the state-of-the-art multi-resource-task co-allocation algorithm in terms of schedulability, resource usage, number of non-dominated solutions, and computational efficiency.
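
As a stand-in for the inner layer's knapsack step, a generic 0-1 knapsack dynamic program is sketched below; in the paper the items would be tasks and the capacity a core's schedulable budget, details this sketch does not model.

def knapsack(values, weights, capacity):
    # best[c] = best total value achievable with capacity c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse scan: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # 22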

Cite as

Binqi Sun, Zhihang Wei, Andrea Bastoni, Debayan Roy, Mirco Theile, Tomasz Kloda, Rodolfo Pellizzoni, and Marco Caccamo. Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems. In 37th Euromicro Conference on Real-Time Systems (ECRTS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 335, pp. 2:1-2:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{sun_et_al:LIPIcs.ECRTS.2025.2,
  author =	{Sun, Binqi and Wei, Zhihang and Bastoni, Andrea and Roy, Debayan and Theile, Mirco and Kloda, Tomasz and Pellizzoni, Rodolfo and Caccamo, Marco},
  title =	{{Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems}},
  booktitle =	{37th Euromicro Conference on Real-Time Systems (ECRTS 2025)},
  pages =	{2:1--2:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-377-5},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{335},
  editor =	{Mancuso, Renato},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECRTS.2025.2},
  URN =		{urn:nbn:de:0030-drops-235807},
  doi =		{10.4230/LIPIcs.ECRTS.2025.2},
  annote =	{Keywords: Multi-objective optimization, memory bandwidth regulation, cache partitioning, partitioned scheduling, real-time systems}
}
Document
Track A: Algorithms, Complexity and Games
Dynamic Algorithms for Submodular Matching

Authors: Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh

Published in: LIPIcs, Volume 334, 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)


Abstract
The Maximum Submodular Matching (MSM) problem is a generalization of the classical Maximum Weight Matching (MWM) problem. In this problem, given a monotone submodular function f: 2^E → ℝ^{≥ 0} defined over subsets of edges of a graph G(V, E), we are asked to return a matching whose submodular value is maximum among all matchings in graph G(V, E). In this paper, we consider this problem in a fully dynamic setting against an oblivious adversary. In this setting, we are given a sequence 𝒮 of insertions and deletions of edges of the underlying graph G(V, E), along with an oracle access to the monotone submodular function f. The goal is to maintain a matching M such that, at any time t of sequence 𝒮, its submodular value is a good approximation of the value of the optimal submodular matching while keeping the number of operations minimal. We develop the first dynamic algorithm for the submodular matching problem, in which we maintain a matching whose submodular value is within expected (8 + ε)-approximation of the optimal submodular matching at any time t of sequence 𝒮 using expected amortized poly(log n, 1/ε) update time. Our approach incorporates a range of novel techniques, notably the concept of Uniform Hierarchical Caches (UHC) data structure along with its invariants, which lead to the first algorithm for fully dynamic submodular matching and may be of independent interest for designing dynamic algorithms for other problems.
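
For contrast with the dynamic setting, the static greedy below grows a matching by the edge of largest marginal gain f(M ∪ {e}) - f(M). The paper's contribution, maintaining such a matching under insertions and deletions in polylogarithmic update time via the UHC structure, is far beyond this sketch.

def greedy_submodular_matching(edges, f):
    matching, matched = [], set()
    while True:
        best_edge, best_gain = None, 0.0
        for u, v in edges:
            if u in matched or v in matched:
                continue  # adding (u, v) would break the matching property
            gain = f(matching + [(u, v)]) - f(matching)
            if gain > best_gain:
                best_edge, best_gain = (u, v), gain
        if best_edge is None:
            return matching
        matching.append(best_edge)
        matched.update(best_edge)

# Example: a coverage-style monotone submodular f over invented edge labels.
labels = {(0, 1): {"a", "b"}, (1, 2): {"b"}, (2, 3): {"b", "c"}}
f = lambda M: len(set().union(*(labels[e] for e in M)))
print(greedy_submodular_matching(list(labels), f))  # [(0, 1), (2, 3)]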

Cite as

Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. Dynamic Algorithms for Submodular Matching. In 52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 334, pp. 19:1-19:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{banihashem_et_al:LIPIcs.ICALP.2025.19,
  author =	{Banihashem, Kiarash and Biabani, Leyla and Goudarzi, Samira and Hajiaghayi, MohammadTaghi and Jabbarzade, Peyman and Monemizadeh, Morteza},
  title =	{{Dynamic Algorithms for Submodular Matching}},
  booktitle =	{52nd International Colloquium on Automata, Languages, and Programming (ICALP 2025)},
  pages =	{19:1--19:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-372-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{334},
  editor =	{Censor-Hillel, Keren and Grandoni, Fabrizio and Ouaknine, Jo\"{e}l and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2025.19},
  URN =		{urn:nbn:de:0030-drops-233969},
  doi =		{10.4230/LIPIcs.ICALP.2025.19},
  annote =	{Keywords: Matching, Submodular, Dynamic, Polylogarithmic}
}
Document
Efficient Neural Network Verification via Order Leading Exploration of Branch-and-Bound Trees

Authors: Guanqin Zhang, Kota Fukuda, Zhenya Zhang, H.M.N. Dilum Bandara, Shiping Chen, Jianjun Zhao, and Yulei Sui

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
The vulnerability of neural networks to adversarial perturbations has necessitated formal verification techniques that can rigorously certify the quality of neural networks. The state-of-the-art approach, branch-and-bound (BaB), is a "divide-and-conquer" strategy that applies off-the-shelf verifiers to sub-problems on which they perform better. While BaB can identify the sub-problems that need to be split, it explores the space of these sub-problems in a naive "first-come-first-served" manner, and can therefore be slow to reach a verification conclusion. To bridge this gap, we introduce an order over the sub-problems produced by BaB, based on their different likelihoods of containing counterexamples. Building on this order, we propose a novel verification framework Oliva that explores the sub-problem space by prioritizing those sub-problems that are more likely to contain counterexamples, in order to reach the verification conclusion efficiently. Even if no sub-problem contains a counterexample, the order only changes how sub-problems are visited and so does not degrade performance. Specifically, Oliva has two variants: Oliva^GR, a greedy strategy that always prioritizes the sub-problems more likely to contain counterexamples, and Oliva^SA, a balanced strategy inspired by simulated annealing that gradually shifts from exploration to exploitation to locate the globally optimal sub-problems. We experimentally evaluate the performance of Oliva on 690 verification problems spanning 5 models and the datasets MNIST and CIFAR-10. Compared to the state-of-the-art approaches, we demonstrate speedups of up to 25× on MNIST and up to 80× on CIFAR-10.
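
The ordering idea can be shown schematically: explore sub-problems from a priority queue keyed by an assumed counterexample-likelihood score instead of first-come-first-served. The callbacks below (score, split, and the two leaf tests) are placeholders for a real verifier's bounding procedure, and in the toy usage the is_verified bound is hand-tailored to the particular scalar function g.

import heapq

def prioritized_bab(root, score, split, is_cex, is_verified):
    heap = [(-score(root), 0, root)]      # max-heap via negated scores
    tie = 1                               # tie-breaker so heapq never compares payloads
    while heap:
        _, _, sub = heapq.heappop(heap)
        if is_cex(sub):
            return "falsified", sub
        if is_verified(sub):
            continue                      # soundly closed, like a verified BaB leaf
        for child in split(sub):
            heapq.heappush(heap, (-score(child), tie, child))
            tie += 1
    return "verified", None

# Toy usage: intervals as sub-problems, g(x) < 0 as the "counterexample".
g = lambda x: (x - 0.3) ** 2 - 0.01       # negative only near x = 0.3
mid = lambda iv: (iv[0] + iv[1]) / 2
print(prioritized_bab(
    (0.0, 1.0),
    score=lambda iv: -g(mid(iv)),         # lower g at the midpoint looks more promising
    split=lambda iv: [(iv[0], mid(iv)), (mid(iv), iv[1])],
    is_cex=lambda iv: iv[1] - iv[0] < 1e-3 and g(mid(iv)) < 0,
    is_verified=lambda iv: not (iv[0] <= 0.3 <= iv[1]) and min(g(iv[0]), g(iv[1])) > 0,
))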

Cite as

Guanqin Zhang, Kota Fukuda, Zhenya Zhang, H.M.N. Dilum Bandara, Shiping Chen, Jianjun Zhao, and Yulei Sui. Efficient Neural Network Verification via Order Leading Exploration of Branch-and-Bound Trees. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 36:1-36:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{zhang_et_al:LIPIcs.ECOOP.2025.36,
  author =	{Zhang, Guanqin and Fukuda, Kota and Zhang, Zhenya and Bandara, H.M.N. Dilum and Chen, Shiping and Zhao, Jianjun and Sui, Yulei},
  title =	{{Efficient Neural Network Verification via Order Leading Exploration of Branch-and-Bound Trees}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{36:1--36:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.36},
  URN =		{urn:nbn:de:0030-drops-233281},
  doi =		{10.4230/LIPIcs.ECOOP.2025.36},
  annote =	{Keywords: neural network verification, branch and bound, counterexample potentiality, simulated annealing, stochastic optimization}
}
Document
FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation

Authors: Amber Gorzynski and Alastair F. Donaldson

Published in: LIPIcs, Volume 333, 39th European Conference on Object-Oriented Programming (ECOOP 2025)


Abstract
Decompilation is the process of translating compiled code into high-level code. Control flow recovery is a challenging part of the process. "Misdecompilations" can occur, whereby the decompiled code does not accurately represent the semantics of the compiled code, despite it being syntactically valid. This is problematic because it can mislead users who are trying to reason about the program. We present CFG-based program generation: a novel approach to randomised testing that aims to improve the control flow recovery of decompilers. CFG-based program generation involves randomly generating control flow graphs (CFGs) and paths through each graph. Inspired by prior work in the domain of GPU computing, (CFG, path) pairs are "fleshed" into test programs. Each program is decompiled and recompiled. The test oracle verifies whether the actual runtime path through the graph matches the expected path. Any difference in the execution paths after recompilation indicates a possible misdecompilation. A key benefit of this approach is that it is largely independent of the source and target languages in question because it is focused on control flow. The approach is therefore applicable to numerous decompilation settings. The trade-off resulting from the focus on control flow is that misdecompilation bugs that do not relate to control flow (e.g. bugs that involve specific arithmetic operations) are out of scope. We have implemented this approach in FuzzFlesh, an open-source randomised testing tool. FuzzFlesh can be easily configured to target a variety of low-level languages and decompiler toolchains because most of the CFG and path generation process is language-independent. At present, FuzzFlesh supports testing decompilation of Java bytecode, .NET assembly and x86 machine code. In addition to program generation, FuzzFlesh also includes an automated test-case reducer that operates on the CFG rather than the low-level program, which means that it can be applied to any of the target languages. We present a large experimental campaign applying FuzzFlesh to a variety of decompilers, leading to the discovery of 12 previously-unknown bugs across two language formats, six of which have been fixed. We present experiments comparing our generic FuzzFlesh tool to two state-of-the-art decompiler testing tools targeted at specific languages. As expected, the coverage our generic FuzzFlesh tool achieves on a given decompiler is lower than the coverage achieved by a tool specifically designed for the input format of that decompiler. However, due to its focus on control flow, FuzzFlesh is able to cover sections of control flow recovery code that the targeted tools cannot reach, and identify control flow related bugs that the targeted tools miss.
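
The generate-flesh-run loop can be shown in miniature. Below, a random forward-edge CFG and a direction sequence determine an expected path; a real FuzzFlesh run would flesh the pair into Java bytecode, .NET assembly or x86, decompile and recompile it, and let the oracle compare the recorded path against the expected one. Here both runs interpret the same CFG, so the oracle trivially passes; the decompile-recompile step would sit between them.

import random

def random_cfg(n_blocks, seed=0):
    # Each non-exit block has two forward successors, so every path
    # reaches the exit block n_blocks - 1.
    rng = random.Random(seed)
    return {b: (rng.randrange(b + 1, n_blocks), rng.randrange(b + 1, n_blocks))
            for b in range(n_blocks - 1)}

def run(cfg, directions, exit_block):
    # "Fleshed" semantics: walk the CFG, consuming one direction bit per
    # branch, and record every visited block.
    path, block, step = [], 0, 0
    while block != exit_block:
        path.append(block)
        block = cfg[block][directions[step]]
        step += 1
    path.append(block)
    return path

cfg = random_cfg(6)
directions = [1, 0, 1, 0, 1]                  # enough bits for the longest path
expected = run(cfg, directions, exit_block=5)
actual = run(cfg, directions, exit_block=5)   # decompile+recompile would go here
print("match" if actual == expected else "MISDECOMPILATION", expected)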

Cite as

Amber Gorzynski and Alastair F. Donaldson. FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation. In 39th European Conference on Object-Oriented Programming (ECOOP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 333, pp. 13:1-13:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gorzynski_et_al:LIPIcs.ECOOP.2025.13,
  author =	{Gorzynski, Amber and Donaldson, Alastair F.},
  title =	{{FuzzFlesh: Randomised Testing of Decompilers via Control Flow Graph-Based Program Generation}},
  booktitle =	{39th European Conference on Object-Oriented Programming (ECOOP 2025)},
  pages =	{13:1--13:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-373-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{333},
  editor =	{Aldrich, Jonathan and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2025.13},
  URN =		{urn:nbn:de:0030-drops-233062},
  doi =		{10.4230/LIPIcs.ECOOP.2025.13},
  annote =	{Keywords: Decompiler, Reverse Engineering, Control Flow, Software Testing, Fuzzing}
}
  • Refine by Type
  • 36 Document/PDF
  • 27 Document/HTML

  • Refine by Publication Year
  • 22 2025
  • 1 2024
  • 6 2023
  • 3 2022
  • 1 2021
  • Show More...

  • Refine by Author
  • 3 Lin, Wei-Kai
  • 3 Lissandrini, Matteo
  • 3 Shi, Elaine
  • 2 Biswas, Russa
  • 2 Bonifati, Angela
  • Show More...

  • Refine by Series/Journal
  • 20 LIPIcs
  • 2 OASIcs
  • 4 LITES
  • 9 TGDK
  • 1 DagMan

  • Refine by Classification
  • 4 Computing methodologies → Knowledge representation and reasoning
  • 3 Information systems → Graph-based database models
  • 3 Theory of computation → Cryptographic protocols
  • 2 Computing methodologies → Artificial intelligence
  • 2 Computing methodologies → Natural language processing
  • Show More...

  • Refine by Keyword
  • 4 Knowledge Graphs
  • 2 Explainable AI
  • 2 Large Language Models
  • 1 AI
  • 1 Accountability
  • Show More...
