38 Search Results for "Robinson, Peter"


Document
Climate Change: What is Computing’s Responsibility? (Dagstuhl Perspectives Workshop 25122)

Authors: Bran Knowles, Vicki L. Hanson, Christoph Becker, Mike Berners-Lee, Andrew A. Chien, Benoit Combemale, Vlad Coroamă, Koen De Bosschere, Yi Ding, Adrian Friday, Boris Gamazaychikov, Lynda Hardman, Simon Hinterholzer, Mattias Höjer, Lynn Kaack, Lenneke Kuijer, Anne-Laure Ligozat, Jan Tobias Muehlberg, Yunmook Nah, Thomas Olsson, Anne-Cécile Orgerie, Daniel Pargman, Birgit Penzenstadler, Tom Romanoff, Emma Strubell, Colin Venters, and Junhua Zhao

Published in: Dagstuhl Manifestos, Volume 11, Issue 1 (2025)


Abstract
This Manifesto was produced from the Perspectives Workshop 25122 entitled "Climate Change: What is Computing’s Responsibility?" held March 16-19, 2025 at Schloss Dagstuhl, Germany. The Workshop provided a forum for world-leading computer scientists and expert consultants on environmental policy and sustainable transition to engage in a critical and urgent conversation about computing’s responsibilities in addressing climate change - or more aptly, climate crisis. The resulting Manifesto outlines commitments and directions for future action which, if adopted as a basis for more responsible computing practices, will help ensure that these technologies do not threaten the long-term habitability of the planet. We preface our Manifesto with a recognition that humanity is on a path that is not aligned with international global warming targets and explore how computing technologies are currently hastening the overshoot of these boundaries. We critically assess the vaunted potential for harnessing computing technologies for the mitigation of global warming, agreeing that, under current circumstances, computing is contributing to negative environmental impacts in other sectors. Computing primarily improves efficiency and reduces costs, which leads to more consumption and more negative environmental impact. Relying solely on efficiency gains in computing has thus far proven to be insufficient to curb global greenhouse gas emissions. Therefore, computing’s purpose within a strategy for tackling climate change must be reimagined. Our recommendations cover changes that need to be urgently made to the design priorities of computing technologies, but also speak to a more systemic shift in mindset, with sustainability and human rights providing a necessary moral foundation for developing the kinds of computing technologies most needed by society. We also stress the importance of digital policy that accounts for both the direct material impacts of computing and the detrimental indirect impacts arising from computing-enabled efficiencies, and the role of computing professionals in informing policy making.

Cite as

Bran Knowles, Vicki L. Hanson, Christoph Becker, Mike Berners-Lee, Andrew A. Chien, Benoit Combemale, Vlad Coroamă, Koen De Bosschere, Yi Ding, Adrian Friday, Boris Gamazaychikov, Lynda Hardman, Simon Hinterholzer, Mattias Höjer, Lynn Kaack, Lenneke Kuijer, Anne-Laure Ligozat, Jan Tobias Muehlberg, Yunmook Nah, Thomas Olsson, Anne-Cécile Orgerie, Daniel Pargman, Birgit Penzenstadler, Tom Romanoff, Emma Strubell, Colin Venters, and Junhua Zhao. Climate Change: What is Computing’s Responsibility? (Dagstuhl Perspectives Workshop 25122). In Dagstuhl Manifestos, Volume 11, Issue 1, pp. 1-18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{knowles_et_al:DagMan.11.1.1,
  author =	{Knowles, Bran and Hanson, Vicki L. and Becker, Christoph and Berners-Lee, Mike and Chien, Andrew A. and Combemale, Benoit and Coroam\u{a}, Vlad and De Bosschere, Koen and Ding, Yi and Friday, Adrian and Gamazaychikov, Boris and Hardman, Lynda and Hinterholzer, Simon and H\"{o}jer, Mattias and Kaack, Lynn and Kuijer, Lenneke and Ligozat, Anne-Laure and Muehlberg, Jan Tobias and Nah, Yunmook and Olsson, Thomas and Orgerie, Anne-C\'{e}cile and Pargman, Daniel and Penzenstadler, Birgit and Romanoff, Tom and Strubell, Emma and Venters, Colin and Zhao, Junhua},
  title =	{{Climate Change: What is Computing’s Responsibility? (Dagstuhl Perspectives Workshop 25122)}},
  pages =	{1--18},
  journal =	{Dagstuhl Manifestos},
  ISSN =	{2193-2433},
  year =	{2025},
  volume =	{11},
  number =	{1},
  editor =	{Knowles, Bran and Hanson, Vicki L. and Becker, Christoph and Berners-Lee, Mike and Chien, Andrew A. and Combemale, Benoit and Coroam\u{a}, Vlad and De Bosschere, Koen and Ding, Yi and Friday, Adrian and Gamazaychikov, Boris and Hardman, Lynda and Hinterholzer, Simon and H\"{o}jer, Mattias and Kaack, Lynn and Kuijer, Lenneke and Ligozat, Anne-Laure and Muehlberg, Jan Tobias and Nah, Yunmook and Olsson, Thomas and Orgerie, Anne-C\'{e}cile and Pargman, Daniel and Penzenstadler, Birgit and Romanoff, Tom and Strubell, Emma and Venters, Colin and Zhao, Junhua},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagMan.11.1.1},
  URN =		{urn:nbn:de:0030-drops-250724},
  doi =		{10.4230/DagMan.11.1.1},
  annote =	{Keywords: sustainability, climate change, efficiency, supply chain management, climate modelling}
}
Document
Fast and Lightweight Distributed Suffix Array Construction

Authors: Manuel Haag, Florian Kurpicz, Peter Sanders, and Matthias Schimek

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
The suffix array contains the lexicographical order of all suffixes of a text. It is one of the most well-studied text indices with applications in bioinformatics, compression, and pattern matching. The main bottleneck of distributed-memory suffix array construction algorithms is their memory requirements. Even careful implementations require 30×-60× the input size as working memory. We present a scalable and lightweight distributed-memory adaptation of the difference cover (DCX) suffix array construction algorithm. Our approach relies on novel bucketing and random chunk redistribution techniques which reduce our memory requirement to 20×-26× the input size for medium-sized inputs and to 14×-15× for large-sized inputs. Regarding running time, we achieve speedups of up to 5× over current state-of-the-art distributed suffix array construction algorithms.
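
For readers unfamiliar with the data structure, the sketch below builds a suffix array naively in Python by sorting suffix start positions. It is purely illustrative and carries none of the distributed, memory-lean DCX machinery the paper contributes.

# Naive suffix array construction, for illustration only: O(n^2 log n) due to
# full string comparisons, unlike the scalable distributed DCX algorithm
# presented in the paper.
def suffix_array(text: str) -> list[int]:
    """Return the starting positions of all suffixes in lexicographic order."""
    return sorted(range(len(text)), key=lambda i: text[i:])

# Suffixes of "banana" in order: a, ana, anana, banana, na, nana
print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]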

Cite as

Manuel Haag, Florian Kurpicz, Peter Sanders, and Matthias Schimek. Fast and Lightweight Distributed Suffix Array Construction. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 47:1-47:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{haag_et_al:LIPIcs.ESA.2025.47,
  author =	{Haag, Manuel and Kurpicz, Florian and Sanders, Peter and Schimek, Matthias},
  title =	{{Fast and Lightweight Distributed Suffix Array Construction}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{47:1--47:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.47},
  URN =		{urn:nbn:de:0030-drops-245154},
  doi =		{10.4230/LIPIcs.ESA.2025.47},
  annote =	{Keywords: Distributed Computing, Suffix Array Construction}
}
Document
Improved Parallel Derandomization via Finite Automata with Applications

Authors: Jeff Giliberti and David G. Harris

Published in: LIPIcs, Volume 351, 33rd Annual European Symposium on Algorithms (ESA 2025)


Abstract
A central approach to algorithmic derandomization is the construction of small-support probability distributions that "fool" randomized algorithms, often enabling efficient parallel (NC) implementations. An abstraction of this idea is fooling polynomial-space statistical tests computed via finite automata [Sivakumar STOC'02]; this encompasses a wide range of properties including k-wise independence and sums of random variables. We present new parallel algorithms to fool finite-state automata, with significantly reduced processor complexity. Briefly, our approach is to iteratively sparsify distributions using a work-efficient lattice rounding routine and maintain accuracy by tracking an aggregate weighted error that is determined by the Lipschitz value of the statistical tests being fooled. We illustrate with improved applications to the Gale-Berlekamp Switching Game and to approximate MAX-CUT via SDP rounding. These involve several further optimizations, such as the truncation of the state space of the automata and FFT-based convolutions to compute transition probabilities efficiently.
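
As a point of reference for the general idea (a small-support distribution that fools a class of statistical tests), the sketch below verifies a classic construction: n exactly pairwise-independent bits generated from an (m+1)-bit seed, so the whole distribution has support O(n). This is not the paper's automata-fooling method, only a minimal illustration of the paradigm.

import itertools

# n pairwise-independent bits from a short seed: X_i = <a, v_i> xor b, with
# distinct nonzero index vectors v_i = i+1. Any test that depends only on
# pairwise statistics is fooled by this support of size 2^(m+1) = O(n).
def pairwise_bits(a: int, b: int, n: int) -> list[int]:
    return [(bin(a & (i + 1)).count("1") + b) % 2 for i in range(n)]

n, m = 7, 3  # requires n <= 2^m - 1
for i, j in itertools.combinations(range(n), 2):
    counts = {(x, y): 0 for x in (0, 1) for y in (0, 1)}
    for a in range(2 ** m):
        for b in (0, 1):
            bits = pairwise_bits(a, b, n)
            counts[(bits[i], bits[j])] += 1
    # each of the 4 outcomes occurs equally often over the whole support
    assert all(c == 2 ** (m - 1) for c in counts.values())
print("all", n, "bits are exactly pairwise independent; support size", 2 ** (m + 1))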

Cite as

Jeff Giliberti and David G. Harris. Improved Parallel Derandomization via Finite Automata with Applications. In 33rd Annual European Symposium on Algorithms (ESA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 351, pp. 70:1-70:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{giliberti_et_al:LIPIcs.ESA.2025.70,
  author =	{Giliberti, Jeff and Harris, David G.},
  title =	{{Improved Parallel Derandomization via Finite Automata with Applications}},
  booktitle =	{33rd Annual European Symposium on Algorithms (ESA 2025)},
  pages =	{70:1--70:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-395-9},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{351},
  editor =	{Benoit, Anne and Kaplan, Haim and Wild, Sebastian and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2025.70},
  URN =		{urn:nbn:de:0030-drops-245381},
  doi =		{10.4230/LIPIcs.ESA.2025.70},
  annote =	{Keywords: Parallel Algorithms, Derandomization, MAX-CUT, Gale-Berlekamp Switching Game}
}
Document
Canonical for Automated Theorem Proving in Lean

Authors: Chase Norman and Jeremy Avigad

Published in: LIPIcs, Volume 352, 16th International Conference on Interactive Theorem Proving (ITP 2025)


Abstract
Canonical is a solver for type inhabitation in dependent type theory, that is, the problem of producing a term of a given type. We present a Lean tactic which invokes Canonical to generate proof terms and synthesize programs. The tactic supports higher-order and dependently-typed goals, structural recursion over indexed inductive types, and definitional equality. Canonical finds proofs for 84% of Natural Number Game problems in 51 seconds total.
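
A minimal usage sketch, assuming the tactic is exposed under the tool's name; the module and tactic identifiers below are guesses, so consult the paper's artifact for the actual interface.

-- Hypothetical Lean usage sketch: `import Canonical` and the `canonical`
-- tactic name are assumed from the tool's name, not taken from the paper.
import Canonical

example (n : Nat) : n + 0 = n := by
  canonical  -- ask Canonical to synthesize the proof term for this goal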

Cite as

Chase Norman and Jeremy Avigad. Canonical for Automated Theorem Proving in Lean. In 16th International Conference on Interactive Theorem Proving (ITP 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 352, pp. 14:1-14:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{norman_et_al:LIPIcs.ITP.2025.14,
  author =	{Norman, Chase and Avigad, Jeremy},
  title =	{{Canonical for Automated Theorem Proving in Lean}},
  booktitle =	{16th International Conference on Interactive Theorem Proving (ITP 2025)},
  pages =	{14:1--14:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-396-6},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{352},
  editor =	{Forster, Yannick and Keller, Chantal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITP.2025.14},
  URN =		{urn:nbn:de:0030-drops-246128},
  doi =		{10.4230/LIPIcs.ITP.2025.14},
  annote =	{Keywords: Automated Reasoning, Interactive Theorem Proving, Dependent Type Theory, Inhabitation, Unification, Program Synthesis, Formal Methods}
}
Document
Virtual Reality Prototyping Environment for Concurrent Design, Training and Rover Operations

Authors: Pinar Dogru, Hanjo Schnellbächer, Tarek Can Battikh, and Kristina Remić

Published in: OASIcs, Volume 130, Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)


Abstract
As part of the CASIMAR (Collaborative Astronaut Supporting Interregional Moon Analog Rover) project, initiated by the BVSR e.V. (Bundesverband Studentischer Raumfahrt), the TUDSaT (TU Darmstadt Space Technology e.V.) team is developing a Virtual Reality (VR) prototype environment to support the interdisciplinary design process of lunar exploration technologies. Given the complexity of collaboration among eight organizations, this tool aims to streamline design integration and enhance mission planning. The primary objective is to create a comprehensive 3D model of the rover, complete with predefined procedures and activities, to simulate astronaut-robot interaction. By leveraging VR technology, astronauts can familiarize themselves with the rover and its EVA (Extravehicular Activity) tools before actual deployment, improving operational safety and efficiency. Beyond training applications, this virtual environment serves as a critical platform for designing, testing, and benchmarking rover functionalities and EVA procedures. Ultimately, our work contributes to optimizing human-robotic interaction, ensuring that lunar exploration missions are both effective and well-prepared before reaching the Moon.

Cite as

Pinar Dogru, Hanjo Schnellbächer, Tarek Can Battikh, and Kristina Remić. Virtual Reality Prototyping Environment for Concurrent Design, Training and Rover Operations. In Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025). Open Access Series in Informatics (OASIcs), Volume 130, pp. 32:1-32:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{dogru_et_al:OASIcs.SpaceCHI.2025.32,
  author =	{Dogru, Pinar and Schnellb\"{a}cher, Hanjo and Battikh, Tarek Can and Remi\'{c}, Kristina},
  title =	{{Virtual Reality Prototyping Environment for Concurrent Design, Training and Rover Operations}},
  booktitle =	{Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
  pages =	{32:1--32:13},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-384-3},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{130},
  editor =	{Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.32},
  URN =		{urn:nbn:de:0030-drops-240226},
  doi =		{10.4230/OASIcs.SpaceCHI.2025.32},
  annote =	{Keywords: virtual reality (VR), digital twin, human-robot-interaction (HRI), LUNA analog facility, rover, extravehicular activities (EVA), gamification, simulation, user-centered design (UCD), concurrent engineering (CE), space system engineering}
}
Document
On the Effectiveness of Interpreter-Guided Compiler Testing

Authors: Federico Lochbaum and Guillermo Polito

Published in: OASIcs, Volume 134, Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)


Abstract
Guaranteeing that a compiler behaves correctly is a complex task often approached through test generation and fuzzing. Compiler test generation must not only ensure that a compiler generates code that does not break, but also that it implements the programming language semantics. Recently, interpreter-guided test generation has been proposed to test JIT compilers: concolic execution on the interpreter yields test cases for the language semantics, which are then validated by differential testing between the interpreter and the compiler. In previous work, this solution has been shown to find interpreter/compiler differences. However, little has been said about the technique's effectiveness and its limits. In this paper we study the behavior of this technique to shed light on future improvements and research. We experiment with this technique on the JIT compiler for the Pharo programming language, on two different backends: ARMv7 and x86. We explore how effective the solution is in terms of compiler coverage and its limitations, and we discuss how future research can overcome them. Moreover, we investigate how this technique combined with random constraint mutations increases backend compiler coverage.
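
The Python sketch below shows the bare shape of the differential step: run each generated input through two implementations and report disagreements. The two executors are toy stand-ins, not models of Pharo's interpreter or JIT compiler.

# Toy differential-testing harness: flag inputs on which two implementations
# of the "same" semantics disagree. `run_compiled` carries a seeded
# corner-case bug so the harness has something to find.
def run_interpreted(x: int) -> int:
    return abs(x) % 7

def run_compiled(x: int) -> int:
    return 6 if x == -1 else abs(x) % 7  # seeded bug at x == -1

def differential_test(inputs) -> list[int]:
    return [x for x in inputs if run_interpreted(x) != run_compiled(x)]

print(differential_test(range(-5, 6)))  # [-1]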

Cite as

Federico Lochbaum and Guillermo Polito. On the Effectiveness of Interpreter-Guided Compiler Testing. In Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025). Open Access Series in Informatics (OASIcs), Volume 134, pp. 20:1-20:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{lochbaum_et_al:OASIcs.Programming.2025.20,
  author =	{Lochbaum, Federico and Polito, Guillermo},
  title =	{{On the Effectiveness of Interpreter-Guided Compiler Testing}},
  booktitle =	{Companion Proceedings of the 9th International Conference on the Art, Science, and Engineering of Programming (Programming 2025)},
  pages =	{20:1--20:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-382-9},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{134},
  editor =	{Edwards, Jonathan and Perera, Roly and Petricek, Tomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.Programming.2025.20},
  URN =		{urn:nbn:de:0030-drops-243040},
  doi =		{10.4230/OASIcs.Programming.2025.20},
  annote =	{Keywords: Virtual Machines, Concolic Testing, JIT compilers, interpreters, Differential Testing, Constraint Mutations, Compiler Coverage}
}
Document
RANDOM
What Is the Minimum Number of Random Bits Required for Computability and Efficiency in Anonymous Networks?

Authors: Dariusz R. Kowalski, Piotr Krysta, and Shay Kutten

Published in: LIPIcs, Volume 353, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)


Abstract
Angluin (STOC'80) and Yamashita and Kameda (PODC'88) show that some useful distributed tasks are impossible (for deterministic algorithms) in a general network if nodes do not possess unique identifiers. However, any task decidable in the non-distributed context can be solved deterministically if the network has a unique leader. Alternatively, much research has been devoted to randomized distributed algorithms in anonymous networks. We present tight upper and lower bounds for the fundamental question: How much randomness is necessary and sufficient to solve Leader Election (LE) in anonymous networks, i.e., to transform an anonymous network into a non-anonymous one? We prove that at least one random bit per node is required in some cases. Surprisingly, a single random bit is also enough, for a total of n bits, where n is the number of nodes. However, the time complexity of our algorithm for general networks, which uses a total of n random bits, turned out to be impractically high. Hence, we also developed time-efficient algorithms for the very symmetric graphs of cliques and cycles, paying only an additional cost of o(n) random bits. The primary steps of our algorithms are of independent interest. At first glance, it seems that using one random bit per node, any algorithm can distinguish only two sets of nodes: those with 0 and those with 1. Our algorithms manage to partition the nodes into more than two sets with high probability. In some sense, they perform the task of a "distributed pseudorandom generator": for example, one of our algorithms turns n bits, one per node, into n unique (with high probability) numbers. Even though a complete graph looks very symmetric, the algorithms explore interesting asymmetries inherent in any n permutations (of n values each), where each describes the assignment (by the adversary) of ports in a node to edges leading to neighbors. Finally, we show how to transform any randomized algorithm that generates xn+o(n) random bits in total to one where each node generates at most x+1 bits. Our results apply to both synchronous and asynchronous networks.
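
For contrast with the paper's one-bit-per-node result, the sketch below simulates the classic coin-flipping approach to symmetry breaking in a toy anonymous clique, which consumes O(log n) random bits per node in expectation. It is explicitly not the paper's algorithm, only context for the randomness-versus-LE trade-off.

import random

# Classic randomized symmetry breaking: each round, every surviving candidate
# flips a bit; if at least one node flips 1, the zeros drop out. Expected
# O(log n) rounds, i.e., O(log n) bits per node in total -- far more than the
# single bit per node shown sufficient in the paper.
def leader_election(n: int, rng: random.Random) -> int:
    candidates = list(range(n))  # indices exist only for the simulation
    bits_used = 0
    while len(candidates) > 1:
        flips = {c: rng.randint(0, 1) for c in candidates}
        bits_used += len(candidates)
        if any(flips.values()):
            candidates = [c for c in candidates if flips[c]]
    return bits_used

bits = leader_election(1024, random.Random(0))
print(f"leader elected using {bits} random bits in total")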

Cite as

Dariusz R. Kowalski, Piotr Krysta, and Shay Kutten. What Is the Minimum Number of Random Bits Required for Computability and Efficiency in Anonymous Networks?. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 353, pp. 41:1-41:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{kowalski_et_al:LIPIcs.APPROX/RANDOM.2025.41,
  author =	{Kowalski, Dariusz R. and Krysta, Piotr and Kutten, Shay},
  title =	{{What Is the Minimum Number of Random Bits Required for Computability and Efficiency in Anonymous Networks?}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2025)},
  pages =	{41:1--41:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-397-3},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{353},
  editor =	{Ene, Alina and Chattopadhyay, Eshan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2025.41},
  URN =		{urn:nbn:de:0030-drops-244071},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2025.41},
  annote =	{Keywords: Distributed computability, Anonymous Networks, Randomness, Leader Election}
}
Document
Efficient Quantum Pseudorandomness from Hamiltonian Phase States

Authors: John Bostanci, Jonas Haferkamp, Dominik Hangleiter, and Alexander Poremba

Published in: LIPIcs, Volume 350, 20th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2025)


Abstract
Quantum pseudorandomness has found applications in many areas of quantum information, ranging from entanglement theory, to models of scrambling phenomena in chaotic quantum systems, and, more recently, in the foundations of quantum cryptography. Kretschmer (TQC '21) showed that both pseudorandom states and pseudorandom unitaries exist even in a world without classical one-way functions. To this day, however, all known constructions require classical cryptographic building blocks which are themselves synonymous with the existence of one-way functions, and which are also challenging to implement on realistic quantum hardware. In this work, we seek to make progress on both of these fronts simultaneously - by decoupling quantum pseudorandomness from classical cryptography altogether. We introduce a quantum hardness assumption called the Hamiltonian Phase State (HPS) problem, which is the task of decoding output states of a random instantaneous quantum polynomial-time (IQP) circuit. Hamiltonian phase states can be generated very efficiently using only Hadamard gates, single-qubit Z rotations and CNOT circuits. We show that the hardness of our problem reduces to a worst-case version of the problem, and we provide evidence that our assumption is plausibly fully quantum; that is, it cannot be used to construct one-way functions. We also show information-theoretic hardness when only a few copies of HPS are available by proving an approximate t-design property of our ensemble. Finally, we show that our HPS assumption and its variants allow us to efficiently construct many pseudorandom quantum primitives, ranging from pseudorandom states, to quantum pseudoentanglement, to pseudorandom unitaries, and even primitives such as public-key encryption with quantum keys.
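
The numpy sketch below builds a toy n-qubit phase state: a uniform superposition whose amplitudes pick up phases from randomly chosen Z-type parity terms. It only conveys the flavor of the construction; the paper's precise HPS ensemble and its parameters differ.

import numpy as np

# Toy statevector of an IQP-style phase state: H^{(x)n}|0^n> followed by a
# diagonal phase that is a sum of random parity terms (a Z-type Hamiltonian).
rng = np.random.default_rng(0)
n = 6
dim = 1 << n
terms = rng.integers(1, dim, size=8)        # bitmasks selecting qubit subsets
thetas = rng.uniform(0, 2 * np.pi, size=8)  # one rotation angle per term

phase = np.zeros(dim)
for S, theta in zip(terms, thetas):
    parity = np.array([bin(v & int(S)).count("1") % 2 for v in range(dim)])
    phase += theta * parity

state = np.exp(1j * phase) / np.sqrt(dim)   # amplitudes of the phase state
print(abs(np.vdot(state, state)))           # ~1.0, i.e., normalized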

Cite as

John Bostanci, Jonas Haferkamp, Dominik Hangleiter, and Alexander Poremba. Efficient Quantum Pseudorandomness from Hamiltonian Phase States. In 20th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 350, pp. 9:1-9:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bostanci_et_al:LIPIcs.TQC.2025.9,
  author =	{Bostanci, John and Haferkamp, Jonas and Hangleiter, Dominik and Poremba, Alexander},
  title =	{{Efficient Quantum Pseudorandomness from Hamiltonian Phase States}},
  booktitle =	{20th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2025)},
  pages =	{9:1--9:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-392-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{350},
  editor =	{Fefferman, Bill},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.TQC.2025.9},
  URN =		{urn:nbn:de:0030-drops-240586},
  doi =		{10.4230/LIPIcs.TQC.2025.9},
  annote =	{Keywords: Quantum pseudorandomness, quantum phase states, quantum cryptography}
}
Document
Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation

Authors: Eftychia Koukouraki, Auriol Degbelo, and Christian Kray

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
Reproducibility is a key principle of the modern scientific method. Maps, as an important means of communicating scientific results in GIScience and across disciplines, should be reproducible. Currently, map reproducibility assessment is done manually, which makes the assessment process tedious and time-consuming, ultimately limiting its efficiency. Hence, this work explores the extent to which Visual Question-Answering (VQA) can be used to automate some tasks relevant to map reproducibility assessment. We selected five state-of-the-art vision language models (VLMs) and followed a three-step approach to evaluate their ability to discriminate between maps and other images, interpret map content, and compare two map images using VQA. Our results show that current VLMs already possess map-reading capabilities and demonstrate understanding of spatial concepts, such as cardinal directions, geographic scope, and legend interpretation. Our paper demonstrates the potential of using VQA to support reproducibility assessment and highlights the outstanding issues that need to be addressed to achieve accurate, trustworthy map descriptions, thereby reducing the time and effort required by human evaluators.
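
A skeletal version of the pipeline, with the model call left abstract: `ask_vlm` below is a hypothetical placeholder, not any real client API.

# Sketch of the map-reproducibility checks posed as VQA. `ask_vlm` is a
# hypothetical stand-in for a concrete vision-language-model client.
def ask_vlm(question: str, images: list[str]) -> str:
    raise NotImplementedError("wire up a concrete VLM client here")

def assess_reproducibility(original_map: str, reproduced_map: str) -> dict:
    return {
        "is_map": ask_vlm("Is this image a map?", [reproduced_map]),
        "content": ask_vlm("Describe the legend, scale and geographic scope.",
                           [reproduced_map]),
        "match": ask_vlm("Do these two maps present the same data in the same way?",
                         [original_map, reproduced_map]),
    }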

Cite as

Eftychia Koukouraki, Auriol Degbelo, and Christian Kray. Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 13:1-13:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{koukouraki_et_al:LIPIcs.GIScience.2025.13,
  author =	{Koukouraki, Eftychia and Degbelo, Auriol and Kray, Christian},
  title =	{{Assessing Map Reproducibility with Visual Question-Answering: An Empirical Evaluation}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{13:1--13:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.13},
  URN =		{urn:nbn:de:0030-drops-238426},
  doi =		{10.4230/LIPIcs.GIScience.2025.13},
  annote =	{Keywords: map comparison, computational reproducibility, visual question answering, large language models, GeoAI}
}
Document
Georeferencing Historical Maps at Scale

Authors: Rere-No-A-Rangi Pope and Marcus Frean

Published in: LIPIcs, Volume 346, 13th International Conference on Geographic Information Science (GIScience 2025)


Abstract
This paper presents a novel approach to automatically georeferencing historical maps using an algorithm based on salient line intersections. Our algorithm addresses the challenges inherent in linking historical map images to contemporary cadastral data, particularly those due to temporal discrepancies, cartographic distortions, and map image noise. By extracting and comparing angular relationships between cadastral features, termed monads and dyads, we establish a robust method for performing record linkage by identifying corresponding spatial patterns across disparate datasets. We employ a Bayesian framework to quantify the likelihood of dyad matches corrupted by measurement noise. The algorithm’s performance was evaluated by selecting a map image and finding putative angle correspondences from the entirety of Aotearoa New Zealand. Even when restricted to a single dyad match, >99% of candidate regions can be successfully filtered out. We discuss the implications and limitations, and suggest strategies for further enhancing the algorithm’s robustness and efficiency. Our work is motivated by previous work in the areas of critical GIS, critical cartography and spatial justice and seeks to contribute to the areas of Spatial Data Science, Historical GIS and GIScience.
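
The numpy sketch below reduces the idea to its simplest form: compare the angle of a query dyad against candidate angles and keep only those within a noise tolerance. The paper's Bayesian likelihood over measurement noise is replaced here by a hard threshold.

import numpy as np

def angle(p, q, r):
    """Angle at vertex q formed by points p and r, in radians."""
    u, v = np.asarray(p) - np.asarray(q), np.asarray(r) - np.asarray(q)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def filter_candidates(query_angle, candidate_angles, tol=np.radians(1.0)):
    keep = np.abs(np.asarray(candidate_angles) - query_angle) <= tol
    return np.nonzero(keep)[0]

q = angle((0, 0), (1, 0), (1, 1))                # a 90-degree query dyad
cands = np.radians([30, 89.5, 90.2, 145, 90.0])  # candidate-region angles
print(filter_candidates(q, cands))               # [1 2 4]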

Cite as

Rere-No-A-Rangi Pope and Marcus Frean. Georeferencing Historical Maps at Scale. In 13th International Conference on Geographic Information Science (GIScience 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 346, pp. 11:1-11:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{pope_et_al:LIPIcs.GIScience.2025.11,
  author =	{Pope, Rere-No-A-Rangi and Frean, Marcus},
  title =	{{Georeferencing Historical Maps at Scale}},
  booktitle =	{13th International Conference on Geographic Information Science (GIScience 2025)},
  pages =	{11:1--11:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-378-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{346},
  editor =	{Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.11},
  URN =		{urn:nbn:de:0030-drops-238400},
  doi =		{10.4230/LIPIcs.GIScience.2025.11},
  annote =	{Keywords: Historical GIS, Georeferencing, Record Linkage, Spatial Data Justice}
}
Document
Haplotype-Aware Long-Read Error Correction

Authors: Parvesh Barak, Daniel Gibney, and Chirag Jain

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
Error correction of long reads is an important initial step in genome assembly workflows. For organisms with ploidy greater than one, it is important to preserve haplotype-specific variation during read correction. This challenge has driven the development of several haplotype-aware correction methods. However, existing methods are based on either ad-hoc heuristics or deep learning approaches. In this paper, we introduce a rigorous formulation for this problem. Our approach builds on the minimum error correction framework used in reference-based haplotype phasing. We prove that the proposed formulation for error correction of reads in de novo context, i.e., without using a reference genome, is NP-hard. To make our exact algorithm scale to large datasets, we introduce practical heuristics. Experiments using PacBio HiFi sequencing datasets from human and plant genomes show that our approach achieves accuracy comparable to state-of-the-art methods. The software is freely available at https://github.com/at-cg/HALE.
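
To make the objective concrete, the brute-force sketch below solves a tiny diploid instance in the minimum-error-correction style: partition reads into two haplotypes so that the number of entries that must be flipped (per-column majority within each group) is minimal. It is exponential in the number of reads, matching the NP-hardness flavor, and is not the paper's scalable heuristic.

from itertools import product

def mec_cost(reads, assignment):
    cost = 0
    for col in range(len(reads[0])):
        for hap in (0, 1):
            vals = [r[col] for r, a in zip(reads, assignment) if a == hap]
            if vals:
                cost += min(vals.count("0"), vals.count("1"))  # flips needed
    return cost

def best_bipartition(reads):
    return min(product((0, 1), repeat=len(reads)),
               key=lambda asg: mec_cost(reads, asg))

reads = ["0011", "0011", "0111", "1100", "1100"]  # third read has one error
asg = best_bipartition(reads)
print(asg, mec_cost(reads, asg))  # (0, 0, 0, 1, 1) 1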

Cite as

Parvesh Barak, Daniel Gibney, and Chirag Jain. Haplotype-Aware Long-Read Error Correction. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{barak_et_al:LIPIcs.WABI.2025.4,
  author =	{Barak, Parvesh and Gibney, Daniel and Jain, Chirag},
  title =	{{Haplotype-Aware Long-Read Error Correction}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.4},
  URN =		{urn:nbn:de:0030-drops-239300},
  doi =		{10.4230/LIPIcs.WABI.2025.4},
  annote =	{Keywords: Genome assembly, phasing, clustering, overlap graph, consensus}
}
Document
Improved Algorithms for Bi-Partition Function Computation

Authors: John D. Bridgers, Jan Hoinka, S. Cenk Sahinalp, Salem Malikic, Teresa M. Przytycka, and Funda Ergun

Published in: LIPIcs, Volume 344, 25th International Conference on Algorithms for Bioinformatics (WABI 2025)


Abstract
The evolutionary history of a tumor, inferred from single-cell sequencing data, is typically represented as a tree in which each subtree corresponds to a clade of cells seeded by a specific set of mutations. Traditional methods typically identify a single most likely tree for downstream analyses, such as detecting driver mutations, studying mutation co-occurrence patterns and identifying common evolutionary trajectories. However, the reliability of such inferred trees, particularly their topology, clade composition, and mutational placements, often remains uncertain. To quantify this uncertainty, the concept of a Bi-partition Function was recently introduced, providing a probabilistic measure of how reliably a mutation seeds a given clade of cells. The single available algorithm for estimating the Bi-partition Function relies on simplifying assumptions and uses sampling for limited exploration of the tree-space. In this paper, we introduce the first exact algorithm for computing the Bi-partition Function. Our algorithm scales linearly with the number of mutations but exhibits super-exponential complexity with respect to the number of cells. Despite this complexity, it establishes crucial ground truth values, essential for accurately benchmarking and validating approximate methods. Additionally, we present a GPU-accelerated version of the available sampling-based algorithm, significantly boosting the computational performance through large-scale parallelization, enabling more accurate Bi-partition Function estimates via deeper exploration of the tree spaces. We compare our methods on synthetic datasets, demonstrating that, especially when the number of mutations sufficiently exceeds the number of cells, our GPU-accelerated sampling algorithm closely approximates the exact ground truth values.
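
The sketch below shows only the estimation step of a sampling approach: given weighted tree samples, abstracted here to a map from each mutation to the clade it seeds, estimate the probability that a mutation seeds a given clade. The sample data is invented purely for illustration; real samplers score whole trees against single-cell data.

def bipartition_estimate(samples, weights, mutation, clade):
    clade = frozenset(clade)
    hits = sum(w for s, w in zip(samples, weights) if s.get(mutation) == clade)
    return hits / sum(weights)

# Three hypothetical posterior tree samples, each reduced to mutation -> clade.
samples = [
    {"m1": frozenset({"c1", "c2"}), "m2": frozenset({"c3"})},
    {"m1": frozenset({"c1", "c2"}), "m2": frozenset({"c2", "c3"})},
    {"m1": frozenset({"c1"}),       "m2": frozenset({"c3"})},
]
weights = [0.5, 0.3, 0.2]
print(bipartition_estimate(samples, weights, "m1", {"c1", "c2"}))  # 0.8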

Cite as

John D. Bridgers, Jan Hoinka, S. Cenk Sahinalp, Salem Malikic, Teresa M. Przytycka, and Funda Ergun. Improved Algorithms for Bi-Partition Function Computation. In 25th International Conference on Algorithms for Bioinformatics (WABI 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 344, pp. 5:1-5:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{bridgers_et_al:LIPIcs.WABI.2025.5,
  author =	{Bridgers, John D. and Hoinka, Jan and Sahinalp, S. Cenk and Malikic, Salem and Przytycka, Teresa M. and Ergun, Funda},
  title =	{{Improved Algorithms for Bi-Partition Function Computation}},
  booktitle =	{25th International Conference on Algorithms for Bioinformatics (WABI 2025)},
  pages =	{5:1--5:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-386-7},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{344},
  editor =	{Brejov\'{a}, Bro\v{n}a and Patro, Rob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2025.5},
  URN =		{urn:nbn:de:0030-drops-239318},
  doi =		{10.4230/LIPIcs.WABI.2025.5},
  annote =	{Keywords: Tumor Evolution, Bi-partition Function, Single-Cell Sequencing, Algorithms}
}
Document
Symbolic Conflict Analysis in Pseudo-Boolean Optimization

Authors: Robert Nieuwenhuis, Albert Oliveras, Enric Rodríguez-Carbonell, and Rui Zhao

Published in: LIPIcs, Volume 341, 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)


Abstract
In the last two decades, a lot of effort has been devoted to the development of satisfiability-checking tools for a variety of SAT-related problems. However, most of these tools lack optimization capabilities. That is, instead of finding any solution, one is sometimes interested in a solution that is best according to some criterion. Pseudo-Boolean solvers can be used to deal with optimization by successively solving a series of problems that contain an additional pseudo-Boolean constraint expressing that a better solution is required. A key point for the success of this simple approach is that lemmas that are learned for one problem can be reused for subsequent ones. In this paper we go one step further and show how, by using a simple symbolic conflict analysis procedure, not only can lemmas be reused between problems but also strengthened, thus further pruning the search space traversal. In addition, we show how this technique automatically allows one to infer upper bounds in maximization problems, thus giving an estimation of how far the solver is from finding an optimal solution. Experimental results with our PB solver reveal that (i) this technique is indeed effective in practice, providing important speedups in problems where several solutions are found, and (ii) on problems with very few solutions, where the impact of our technique is limited, its overhead is negligible.
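
The sketch below shows only the outer solve-improve loop that the learned (and, in the paper, symbolically strengthened) lemmas accelerate: repeatedly ask a PB solver for any solution strictly cheaper than the incumbent. The brute-force "solver" is a toy stand-in.

from itertools import product

def cost(asg):                         # objective: minimize 3*x0 + 2*x1 + x2
    return 3 * asg[0] + 2 * asg[1] + asg[2]

def pb_solve(n, constraints, bound):
    """Any assignment satisfying all constraints with cost < bound, else None."""
    for asg in product((0, 1), repeat=n):
        if all(c(asg) for c in constraints) and cost(asg) < bound:
            return asg
    return None

constraints = [lambda a: a[0] + a[1] + a[2] >= 2]  # at least two variables set
best, bound = None, float("inf")
while (sol := pb_solve(3, constraints, bound)) is not None:
    best, bound = sol, cost(sol)       # tighten the bound and re-solve
print(best, bound)  # (0, 1, 1) 3 -- the optimum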

Cite as

Robert Nieuwenhuis, Albert Oliveras, Enric Rodríguez-Carbonell, and Rui Zhao. Symbolic Conflict Analysis in Pseudo-Boolean Optimization. In 28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 341, pp. 23:1-23:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{nieuwenhuis_et_al:LIPIcs.SAT.2025.23,
  author =	{Nieuwenhuis, Robert and Oliveras, Albert and Rodr{\'\i}guez-Carbonell, Enric and Zhao, Rui},
  title =	{{Symbolic Conflict Analysis in Pseudo-Boolean Optimization}},
  booktitle =	{28th International Conference on Theory and Applications of Satisfiability Testing (SAT 2025)},
  pages =	{23:1--23:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-381-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{341},
  editor =	{Berg, Jeremias and Nordstr\"{o}m, Jakob},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2025.23},
  URN =		{urn:nbn:de:0030-drops-237579},
  doi =		{10.4230/LIPIcs.SAT.2025.23},
  annote =	{Keywords: SAT, Pseudo-Boolean Optimization, Conflict Analysis}
}
Document
From Prediction to Precision: Leveraging LLMs for Equitable and Data-Driven Writing Placement in Developmental Education

Authors: Miguel Da Corte and Jorge Baptista

Published in: OASIcs, Volume 135, 14th Symposium on Languages, Applications and Technologies (SLATE 2025)


Abstract
Accurate text classification and placement remain challenges in U.S. higher education, with traditional automated systems like Accuplacer functioning as "black-box" models with limited assessment transparency. This study evaluates Large Language Models (LLMs) as complementary placement tools by comparing their classification performance against a human-rated gold standard and Accuplacer. A 450-essay corpus was classified using Claude, Gemini, GPT-3.5-turbo, and GPT-4o across four prompting strategies: Zero-shot, Few-shot, Enhanced, and Enhanced+ (definitions with examples). Two classification approaches were tested: (i) a 1-step, 3-class classification task, distinguishing DevEd Level 1, DevEd Level 2, and College-level texts in one single run; and (ii) a 2-step classification task, first separating College vs. Non-College texts before further classifying Non-College texts into DevEd sublevels. The results show that structured prompt refinement improves the precision of LLMs' classification, with Claude Enhanced+ achieving 62.22% precision (1-step) and Gemini Enhanced+ reaching 69.33% (2-step), both surpassing Accuplacer (58.22%). Gemini and Claude also demonstrated strong correlation with human ratings, with Claude achieving the highest Pearson scores (ρ = 0.75, 1-step; ρ = 0.73, 2-step) vs. Accuplacer (ρ = 0.67). While LLMs show promise for DevEd placement, their precision remains a work in progress, highlighting the need for further refinement and safeguards to ensure ethical and equitable placement.
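
A skeletal version of the 2-step task, with the model call left abstract: `classify_with_llm` is a hypothetical placeholder for a concrete Claude/Gemini/GPT call under one of the paper's prompting strategies; no real API is implied.

def classify_with_llm(essay, labels, prompt_style):
    raise NotImplementedError("call a concrete LLM with the chosen prompt here")

def place_essay(essay, prompt_style="Enhanced+"):
    # Step 1: College vs. Non-College.
    if classify_with_llm(essay, ["College", "Non-College"], prompt_style) == "College":
        return "College-level"
    # Step 2: split Non-College texts into the two DevEd sublevels.
    return classify_with_llm(essay, ["DevEd Level 1", "DevEd Level 2"], prompt_style)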

Cite as

Miguel Da Corte and Jorge Baptista. From Prediction to Precision: Leveraging LLMs for Equitable and Data-Driven Writing Placement in Developmental Education. In 14th Symposium on Languages, Applications and Technologies (SLATE 2025). Open Access Series in Informatics (OASIcs), Volume 135, pp. 1:1-1:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{dacorte_et_al:OASIcs.SLATE.2025.1,
  author =	{Da Corte, Miguel and Baptista, Jorge},
  title =	{{From Prediction to Precision: Leveraging LLMs for Equitable and Data-Driven Writing Placement in Developmental Education}},
  booktitle =	{14th Symposium on Languages, Applications and Technologies (SLATE 2025)},
  pages =	{1:1--1:18},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-387-4},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{135},
  editor =	{Baptista, Jorge and Barateiro, Jos\'{e}},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SLATE.2025.1},
  URN =		{urn:nbn:de:0030-drops-236817},
  doi =		{10.4230/OASIcs.SLATE.2025.1},
  annote =	{Keywords: Large Language Models (LLMs), Developmental Education (DevEd), writing assessment, text classification, English writing proficiency}
}
Document
U-Index: A Universal Indexing Framework for Matching Long Patterns

Authors: Lorraine A. K. Ayad, Gabriele Fici, Ragnar Groot Koerkamp, Grigorios Loukides, Rob Patro, Giulio Ermanno Pibiri, and Solon P. Pissis

Published in: LIPIcs, Volume 338, 23rd International Symposium on Experimental Algorithms (SEA 2025)


Abstract
Motivation. Text indexing is a fundamental and well-studied problem. Classic solutions to this problem either replace the original text with a compressed representation, e.g., the FM-index and its variants, or keep it uncompressed but attach some redundancy - an index - to accelerate matching, e.g., the suffix array. The former solutions thus retain excellent compressed space, but are practically slow to construct and query. The latter approaches, instead, sacrifice space efficiency but are typically faster; for example, the suffix array takes much more space than the text itself for commonly used alphabets, like ASCII or DNA, but it is very fast to construct and query. Methods. In this paper, we show that efficient text indexing can be achieved using just a small amount of extra space on top of the original text, provided that the query patterns are sufficiently long. More specifically, we develop a new indexing paradigm in which a sketch of a query pattern is first matched against a sketch of the text. Once candidate matches are retrieved, they are verified using the original text. This paradigm is thus universal in the sense that it allows us to use any solution to index the sketched text, like a suffix array, FM-index, or r-index. Results. We explore both the theory and the practice of this universal framework. With an extensive experimental analysis, we show that, surprisingly, universal indexes can be constructed much faster than their unsketched counterparts and take a fraction of the space, as a direct consequence of (i) having a lower bound on the length of patterns and (ii) working in sketch space. Furthermore, these data structures have the potential of retaining or even improving query time, because matching against the sketched text is faster and verifying candidates can be theoretically done in constant time per occurrence (or, in practice, by short and cache-friendly scans of the text). Finally, we discuss some important applications of this novel indexing paradigm to computational biology. We hypothesize that such indexes will be particularly effective when the queries are sufficiently long, and so we demonstrate applications in long-read mapping.
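
A minimal sketch of the match-then-verify paradigm, assuming minimizer sketching and replacing the full-text index over the sketch (a suffix array, FM-index, or r-index in the paper) with a plain dictionary:

def minimizers(s, k, w):
    """(position, k-mer) pairs of the least k-mer in each window of w k-mers."""
    out = []
    for i in range(len(s) - w - k + 2):
        km, pos = min((s[j:j + k], j) for j in range(i, i + w))
        if not out or out[-1] != (pos, km):
            out.append((pos, km))
    return out

def find_long_pattern(text, pattern, k=3, w=4):
    index = {}
    for pos, km in minimizers(text, k, w):      # the "sketch" of the text
        index.setdefault(km, []).append(pos)
    p_pos, p_km = minimizers(pattern, k, w)[0]  # anchor: pattern's first minimizer
    hits = []
    for t_pos in index.get(p_km, []):
        start = t_pos - p_pos
        if start >= 0 and text[start:start + len(pattern)] == pattern:
            hits.append(start)                  # verified on the original text
    return hits

text = "the quick brown fox jumps over the lazy dog"
print(find_long_pattern(text, "brown fox jumps"))  # [10]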

Cite as

Lorraine A. K. Ayad, Gabriele Fici, Ragnar Groot Koerkamp, Grigorios Loukides, Rob Patro, Giulio Ermanno Pibiri, and Solon P. Pissis. U-Index: A Universal Indexing Framework for Matching Long Patterns. In 23rd International Symposium on Experimental Algorithms (SEA 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 338, pp. 4:1-4:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


Copy BibTex To Clipboard

@InProceedings{ayad_et_al:LIPIcs.SEA.2025.4,
  author =	{Ayad, Lorraine A. K. and Fici, Gabriele and Groot Koerkamp, Ragnar and Loukides, Grigorios and Patro, Rob and Pibiri, Giulio Ermanno and Pissis, Solon P.},
  title =	{{U-Index: A Universal Indexing Framework for Matching Long Patterns}},
  booktitle =	{23rd International Symposium on Experimental Algorithms (SEA 2025)},
  pages =	{4:1--4:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-375-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{338},
  editor =	{Mutzel, Petra and Prezza, Nicola},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2025.4},
  URN =		{urn:nbn:de:0030-drops-232420},
  doi =		{10.4230/LIPIcs.SEA.2025.4},
  annote =	{Keywords: Text Indexing, Sketching, Minimizers, Hashing}
}
  • Refine by Type
  • 38 Document/PDF
  • 29 Document/HTML

  • Refine by Publication Year
  • 27 2025
  • 4 2024
  • 3 2023
  • 1 2022
  • 1 2019
  • Show More...

  • Refine by Author
  • 8 Robinson, Peter
  • 3 Pandurangan, Gopal
  • 3 Pemmaraju, Sriram V.
  • 2 Chen, Jiaoyan
  • 2 Dufoulon, Fabien
  • Show More...

  • Refine by Series/Journal
  • 30 LIPIcs
  • 3 OASIcs
  • 1 LITES
  • 3 TGDK
  • 1 DagMan

  • Refine by Classification
  • 12 Theory of computation → Distributed algorithms
  • 2 Applied computing → Life and medical sciences
  • 2 Computing methodologies → Knowledge representation and reasoning
  • 2 Theory of computation
  • 2 Theory of computation → Automated reasoning
  • Show More...

  • Refine by Keyword
  • 2 Approximation Algorithms
  • 2 Distributed graph algorithm
  • 2 consensus
  • 1 (1,2)-TSP
  • 1 Algorithms
  • Show More...
