21 Search Results for "Wang, Kai"


Document
Matrix Multiplication Verification Using Coding Theory

Authors: Huck Bennett, Karthik Gajulapalli, Alexander Golovnev, and Evelyn Warton

Published in: LIPIcs, Volume 317, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)


Abstract
We study the Matrix Multiplication Verification Problem (MMV) where the goal is, given three n × n matrices A, B, and C as input, to decide whether AB = C. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in Õ(n²) time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in o(n^ω) time). To that end, we give two algorithms for MMV in the case where AB - C is sparse. Specifically, when AB - C has at most O(n^δ) non-zero entries for a constant 0 ≤ δ < 2, we give (1) a deterministic O(n^(ω-ε))-time algorithm for constant ε = ε(δ) > 0, and (2) a randomized Õ(n²)-time algorithm using δ/2 ⋅ log₂ n + O(1) random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when δ ≥ 1.056, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses log₂ n + O(1) random bits (in turn fewer than Freivalds’s algorithm). Our algorithms are simple and use techniques from coding theory. Let H be a parity-check matrix of a Maximum Distance Separable (MDS) code, and let G = (I | G') be a generator matrix of a (possibly different) MDS code in systematic form. Our deterministic algorithm uses fast rectangular matrix multiplication to check whether HAB = HC and H(AB)^T = H(C^T), and our randomized algorithm samples a uniformly random row g' from G' and checks whether g'AB = g'C and g'(AB)^T = g'C^T. We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require Ω(n^ω) time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic Õ(n²)-time reductions).
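For orientation, below is a minimal sketch of the classic Freivalds-style check that this work builds on, not the paper's coding-theoretic algorithms: draw a random 0/1 vector r and compare A(Br) with Cr, which costs O(n²) per trial. The matrix sizes, entry ranges, and number of trials are illustrative assumptions.

```python
import numpy as np

def freivalds_check(A, B, C, trials=20, rng=None):
    """Randomized check of AB == C in O(n^2) time per trial.

    Each trial draws a random 0/1 vector r and compares A(Br) with Cr.
    If AB != C, a single trial fails to notice with probability at most
    1/2, so `trials` independent trials err with probability <= 2**-trials.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n, dtype=np.int64)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False   # witness found: definitely AB != C
    return True            # AB == C with high probability

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(0, 10, (200, 200))
    B = rng.integers(0, 10, (200, 200))
    C = A @ B
    print(freivalds_check(A, B, C, rng=rng))   # True
    C[3, 7] += 1                               # a single sparse error
    print(freivalds_check(A, B, C, rng=rng))   # almost surely False
```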

Cite as

Huck Bennett, Karthik Gajulapalli, Alexander Golovnev, and Evelyn Warton. Matrix Multiplication Verification Using Coding Theory. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 317, pp. 42:1-42:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bennett_et_al:LIPIcs.APPROX/RANDOM.2024.42,
  author =	{Bennett, Huck and Gajulapalli, Karthik and Golovnev, Alexander and Warton, Evelyn},
  title =	{{Matrix Multiplication Verification Using Coding Theory}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2024)},
  pages =	{42:1--42:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-348-5},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{317},
  editor =	{Kumar, Amit and Ron-Zewi, Noga},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2024.42},
  URN =		{urn:nbn:de:0030-drops-210352},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2024.42},
  annote =	{Keywords: Matrix Multiplication Verification, Derandomization, Sparse Matrices, Error-Correcting Codes, Hardness Barriers, Reductions}
}
Document
CFT-Forensics: High-Performance Byzantine Accountability for Crash Fault Tolerant Protocols

Authors: Weizhao Tang, Peiyao Sheng, Ronghao Ni, Pronoy Roy, Xuechao Wang, Giulia Fanti, and Pramod Viswanath

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Crash fault tolerant (CFT) consensus algorithms are commonly used in scenarios where system components are trusted - e.g., enterprise settings and government infrastructure. However, CFT consensus can be broken by even a single corrupt node. A desirable property in the face of such potential Byzantine faults is accountability: if a corrupt node breaks the protocol and affects consensus safety, it should be possible to identify the culpable components with cryptographic integrity from the node states. Today, the best-known protocol for providing accountability to CFT protocols is called PeerReview; it essentially records a signed transcript of all messages sent during the CFT protocol. Because PeerReview is agnostic to the underlying CFT protocol, it incurs high communication and storage overhead. We propose CFT-Forensics, an accountability framework for CFT protocols. We show that for a special family of forensics-compliant CFT protocols (which includes widely-used CFT protocols like Raft and multi-Paxos), CFT-Forensics gives provable accountability guarantees. Under realistic deployment settings, we show theoretically that CFT-Forensics operates at a fraction of the cost of PeerReview. We subsequently instantiate CFT-Forensics for Raft, and implement Raft-Forensics as an extension to the popular nuRaft library. In extensive experiments, we demonstrate that Raft-Forensics adds low overhead to vanilla Raft. With 256-byte messages, Raft-Forensics achieves a peak throughput of 87.8% of vanilla Raft's, at 46% higher latency (+44 ms). We finally integrate Raft-Forensics into the open-source central bank digital currency OpenCBDC, and show that in wide-area network experiments, Raft-Forensics achieves 97.8% of the throughput of Raft, with 14.5% higher latency (+326 ms).
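As a rough illustration of the transcript-recording idea the abstract attributes to PeerReview, the sketch below appends each sent message to a hash-chained, authenticated log that can later be audited. It is not the CFT-Forensics protocol: the HMAC is only a stand-in for a real digital signature, and all names and message fields are hypothetical.

```python
import hashlib
import hmac
import json

class TranscriptLog:
    """Toy hash-chained message log with per-entry authentication.

    A real accountability layer would use public-key signatures; HMAC
    with a shared key is used here purely to keep the sketch self-contained.
    """

    def __init__(self, key: bytes):
        self._key = key
        self.entries = []
        self._prev = b"\x00" * 32  # genesis link

    def append(self, sender: str, message: dict) -> dict:
        body = json.dumps({"sender": sender, "msg": message}, sort_keys=True).encode()
        link = hashlib.sha256(self._prev + body).digest()   # chain to previous entry
        tag = hmac.new(self._key, link, hashlib.sha256).hexdigest()
        entry = {"body": body.decode(), "link": link.hex(), "tag": tag}
        self.entries.append(entry)
        self._prev = link
        return entry

    def audit(self) -> bool:
        """Recompute the hash chain and tags; any tampering breaks the check."""
        prev = b"\x00" * 32
        for e in self.entries:
            link = hashlib.sha256(prev + e["body"].encode()).digest()
            tag = hmac.new(self._key, link, hashlib.sha256).hexdigest()
            if link.hex() != e["link"] or not hmac.compare_digest(tag, e["tag"]):
                return False
            prev = link
        return True

if __name__ == "__main__":
    log = TranscriptLog(key=b"demo-key")
    log.append("node-1", {"type": "append_entries", "term": 3, "index": 42})
    print(log.audit())                                                   # True
    log.entries[0]["body"] = log.entries[0]["body"].replace("42", "41")  # tamper
    print(log.audit())                                                   # False
```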

Cite as

Weizhao Tang, Peiyao Sheng, Ronghao Ni, Pronoy Roy, Xuechao Wang, Giulia Fanti, and Pramod Viswanath. CFT-Forensics: High-Performance Byzantine Accountability for Crash Fault Tolerant Protocols. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 3:1-3:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{tang_et_al:LIPIcs.AFT.2024.3,
  author =	{Tang, Weizhao and Sheng, Peiyao and Ni, Ronghao and Roy, Pronoy and Wang, Xuechao and Fanti, Giulia and Viswanath, Pramod},
  title =	{{CFT-Forensics: High-Performance Byzantine Accountability for Crash Fault Tolerant Protocols}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{3:1--3:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.3},
  URN =		{urn:nbn:de:0030-drops-209399},
  doi =		{10.4230/LIPIcs.AFT.2024.3},
  annote =	{Keywords: CFT Protocols, forensics, blockchain}
}
Document
DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance

Authors: Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Decentralized Finance (DeFi) has witnessed a monumental surge, reaching 53.039 billion USD in total value locked. As this sector continues to expand, ensuring the reliability of DeFi smart contracts becomes increasingly crucial. While some users are adept at reading code or the compiled bytecode to understand smart contracts, many rely on documentation. Therefore, discrepancies between the documentation and the deployed code can pose significant risks, whether these discrepancies are due to errors or intentional fraud. To tackle these challenges, we developed DeFiAligner, an end-to-end system to identify inconsistencies between documentation and smart contracts. DeFiAligner incorporates a symbolic execution tool, SEVM, which explores execution paths of on-chain binary code, recording memory and stack states. It automatically generates symbolic expressions for token balance changes and branch conditions, which, along with related project documents, are processed by LLMs. Using structured prompts, the LLMs evaluate the alignment between the symbolic expressions and the documentation. Our tests across three distinct scenarios demonstrate DeFiAligner’s capability to automate inconsistency detection in DeFi, achieving recall rates of 92% and 90% on two public datasets, respectively.
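A rough sketch of the kind of structured prompt the abstract describes, pairing a symbolic expression for a balance change with a documentation excerpt. The section headings, field names, and example content are assumptions for illustration, not DeFiAligner's actual prompt template.

```python
def build_alignment_prompt(symbolic_expr: str, branch_conditions: list[str], doc_excerpt: str) -> str:
    """Assemble a structured prompt asking an LLM to judge whether the
    documented behaviour matches a symbolic summary of the bytecode.

    Illustrative only: the real system's prompt format is not given in
    the abstract, so the layout below is an assumption.
    """
    conditions = "\n".join(f"- {c}" for c in branch_conditions) or "- (none)"
    return (
        "You are auditing a DeFi smart contract.\n\n"
        "### Symbolic token-balance change (from on-chain bytecode)\n"
        f"{symbolic_expr}\n\n"
        "### Branch conditions on this path\n"
        f"{conditions}\n\n"
        "### Project documentation excerpt\n"
        f"{doc_excerpt}\n\n"
        "Question: Is the documentation consistent with the symbolic "
        "expression above? Answer CONSISTENT or INCONSISTENT and explain briefly."
    )

if __name__ == "__main__":
    print(build_alignment_prompt(
        symbolic_expr="balance[user] += amount * (1 - fee); fee == 0.003",
        branch_conditions=["amount > 0", "paused == false"],
        doc_excerpt="Deposits are credited in full with no fees.",
    ))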

Cite as

Rundong Gan, Liyi Zhou, Le Wang, Kaihua Qin, and Xiaodong Lin. DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 7:1-7:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{gan_et_al:LIPIcs.AFT.2024.7,
  author =	{Gan, Rundong and Zhou, Liyi and Wang, Le and Qin, Kaihua and Lin, Xiaodong},
  title =	{{DeFiAligner: Leveraging Symbolic Analysis and Large Language Models for Inconsistency Detection in Decentralized Finance}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{7:1--7:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.7},
  URN =		{urn:nbn:de:0030-drops-209431},
  doi =		{10.4230/LIPIcs.AFT.2024.7},
  annote =	{Keywords: Decentralized Finance Security, Large Language Models, Project Review, Symbolic Analysis, Smart Contracts}
}
Document
Privacy Comparison for Bitcoin Light Client Implementations

Authors: Arad Kotzer and Ori Rottenstreich

Published in: LIPIcs, Volume 316, 6th Conference on Advances in Financial Technologies (AFT 2024)


Abstract
Light clients implement a simple solution for Bitcoin’s scalability problem, as they do not store the entire blockchain but only the state of particular addresses of interest. To be able to keep track of the updated state of their addresses, light clients rely on full nodes to provide them with the required information. To do so, they must reveal information about the addresses they are interested in. This paper studies the two most common light client implementations, SPV and Neutrino, with regard to their privacy. We define privacy metrics for comparing the privacy of the different implementations. We evaluate and compare the privacy of the implementations over time on real Bitcoin data and discuss the inherent privacy-communication tradeoff. In addition, we propose general techniques to enhance light client privacy in the existing implementations. Finally, we propose a new SPV-based light client model, the aggregation model, evaluate it, and show that it can achieve better privacy than the existing light client implementations.
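Since the comparison centres on what a light client must reveal about its addresses, a minimal Bloom filter may help as background: BIP 37-style SPV clients send such a filter to full nodes to describe the addresses they care about, and its false-positive rate is the knob that trades bandwidth against privacy. The parameters and addresses below are illustrative, not taken from the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter of the kind an SPV client might send to a full node."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 5):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item: bytes):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

if __name__ == "__main__":
    bf = BloomFilter()
    bf.add(b"bc1q-example-address-1")
    print(bf.might_contain(b"bc1q-example-address-1"))  # True
    print(bf.might_contain(b"bc1q-unrelated-address"))  # usually False
```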

Cite as

Arad Kotzer and Ori Rottenstreich. Privacy Comparison for Bitcoin Light Client Implementations. In 6th Conference on Advances in Financial Technologies (AFT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 316, pp. 15:1-15:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kotzer_et_al:LIPIcs.AFT.2024.15,
  author =	{Kotzer, Arad and Rottenstreich, Ori},
  title =	{{Privacy Comparison for Bitcoin Light Client Implementations}},
  booktitle =	{6th Conference on Advances in Financial Technologies (AFT 2024)},
  pages =	{15:1--15:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-345-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{316},
  editor =	{B\"{o}hme, Rainer and Kiffer, Lucianna},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2024.15},
  URN =		{urn:nbn:de:0030-drops-209510},
  doi =		{10.4230/LIPIcs.AFT.2024.15},
  annote =	{Keywords: Blockchain, Privacy, Light Clients, Bloom filter}
}
Document
Java Bytecode Normalization for Code Similarity Analysis

Authors: Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Analyzing the similarity of two code fragments has many applications, including code clone, vulnerability and plagiarism detection. Most existing approaches for similarity analysis work on source code. However, in scenarios like plagiarism detection, copyright violation detection, or Software Bill of Materials creation, source code is often not available, and thus similarity analysis has to be performed on binary formats. Java bytecode is a binary format executable by the Java Virtual Machine and obtained from the compilation of Java source code. Performing similarity detection on bytecode is challenging because different compilers can compile the same source code to syntactically vastly different bytecode. In this work we assess to what extent one can nonetheless enable similarity detection by bytecode normalization, a procedure to transform Java bytecode into a representation that is identical for the same original source code, irrespective of the Java compiler and Java version used during compilation. Our manual study revealed 16 classes of compilation differences that various compilation environments may induce. Based on these findings, we implemented bytecode normalization in the tool jNorm. It uses Jimple as intermediate representation, applies common code optimizations and transforms all classes of compilation difference to a normalized form, thus achieving a representation of the bytecode that is identical despite different compilation environments. Our evaluation, performed on more than 300 popular Java projects, shows that the mere act of incrementing a compiler version may cause differences in 46% of all resulting bytecode files. By applying bytecode normalization, one can remove more than 99% of these differences, thus acting as a crucial enabler for subsequent applications of bytecode similarity analysis.
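To make the normalization idea concrete, here is a deliberately coarse sketch of why similarity scores change once compiler-dependent details are stripped: two instruction sequences that differ only in local-variable slot numbering look dissimilar raw but identical after a toy normalization. This is a generic illustration, not jNorm's Jimple-based pipeline, and the instruction sequences are made up.

```python
from collections import Counter

def ngrams(tokens, n=2):
    """N-gram profile of an instruction sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def similarity(a, b, n=2):
    """Fraction of shared instruction n-grams (0 = disjoint, 1 = identical profiles)."""
    ca, cb = ngrams(a, n), ngrams(b, n)
    shared = sum((ca & cb).values())
    return shared / max(sum(ca.values()), sum(cb.values()), 1)

def normalize(instructions):
    """Toy normalization: drop slot operands so compiler choices such as
    local-variable numbering do not affect the comparison. The real tool
    handles 16 classes of differences; this is only a stand-in."""
    return [ins.split()[0] for ins in instructions]

if __name__ == "__main__":
    javac = ["aload 1", "iload 2", "iadd", "istore 3", "iload 3", "ireturn"]
    ecj   = ["aload 2", "iload 1", "iadd", "istore 4", "iload 4", "ireturn"]
    print(similarity(javac, ecj))                        # 0.0: raw bytecode differs
    print(similarity(normalize(javac), normalize(ecj)))  # 1.0 after normalization
```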

Cite as

Stefan Schott, Serena Elisa Ponta, Wolfram Fischer, Jonas Klauke, and Eric Bodden. Java Bytecode Normalization for Code Similarity Analysis. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 37:1-37:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{schott_et_al:LIPIcs.ECOOP.2024.37,
  author =	{Schott, Stefan and Ponta, Serena Elisa and Fischer, Wolfram and Klauke, Jonas and Bodden, Eric},
  title =	{{Java Bytecode Normalization for Code Similarity Analysis}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{37:1--37:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.37},
  URN =		{urn:nbn:de:0030-drops-208865},
  doi =		{10.4230/LIPIcs.ECOOP.2024.37},
  annote =	{Keywords: Bytecode, Java Compiler, Code Similarity Analysis}
}
Document
Failure Transparency in Stateful Dataflow Systems

Authors: Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller

Published in: LIPIcs, Volume 313, 38th European Conference on Object-Oriented Programming (ECOOP 2024)


Abstract
Failure transparency enables users to reason about distributed systems at a higher level of abstraction, where complex failure-handling logic is hidden. This is especially true for stateful dataflow systems, which are the backbone of many cloud applications. In particular, this paper focuses on proving failure transparency in Apache Flink, a popular stateful dataflow system. Even though failure transparency is a critical aspect of Apache Flink, to date it has not been formally proven. Showing that the failure transparency mechanism is correct, however, is challenging due to the complexity of the mechanism itself. Nevertheless, this complexity can be effectively hidden behind a failure transparent programming interface. To show that Apache Flink is failure transparent, we model it in small-step operational semantics. Next, we provide a novel definition of failure transparency based on observational explainability, a concept which relates executions according to their observations. Finally, we provide a formal proof of failure transparency for the implementation model; i.e., we prove that the failure-free model correctly abstracts from the failure-related details of the implementation model. We also show liveness of the implementation model under a fair execution assumption. These results are a first step towards a verified stack for stateful dataflow systems.
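As an intuition pump only (not the paper's small-step model of Apache Flink), the toy operator below periodically checkpoints its state and, after a crash, restores the last checkpoint and replays the inputs logged since then; an observer of the final state cannot tell the failed run from a failure-free one. The state, checkpoint interval, and the assumption that the input log survives the crash are all illustrative.

```python
import copy

class CheckpointingOperator:
    """Toy stateful operator with checkpoint/replay recovery."""

    def __init__(self, checkpoint_every: int = 3):
        self.state = {"count": 0, "sum": 0}
        self.checkpoint_every = checkpoint_every
        self._snapshot = copy.deepcopy(self.state)
        self._since_snapshot = []   # inputs logged since the last checkpoint
        self._processed = 0

    def process(self, value: int) -> int:
        self.state["count"] += 1
        self.state["sum"] += value
        self._since_snapshot.append(value)
        self._processed += 1
        if self._processed % self.checkpoint_every == 0:
            self._snapshot = copy.deepcopy(self.state)   # take a checkpoint
            self._since_snapshot = []
        return self.state["sum"]

    def crash_and_recover(self) -> None:
        """Lose in-memory state, restore the snapshot, replay the logged inputs.

        The input log is assumed durable (it would live outside the node
        in a real system); here it is kept in memory for simplicity.
        """
        pending = list(self._since_snapshot)
        self.state = copy.deepcopy(self._snapshot)
        self._since_snapshot = []
        for value in pending:
            self.state["count"] += 1
            self.state["sum"] += value
            self._since_snapshot.append(value)

if __name__ == "__main__":
    op = CheckpointingOperator()
    for v in [1, 2, 3, 4]:
        op.process(v)
    op.crash_and_recover()
    print(op.state)  # {'count': 4, 'sum': 10}: same as a failure-free run
```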

Cite as

Aleksey Veresov, Jonas Spenger, Paris Carbone, and Philipp Haller. Failure Transparency in Stateful Dataflow Systems. In 38th European Conference on Object-Oriented Programming (ECOOP 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 313, pp. 42:1-42:31, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{veresov_et_al:LIPIcs.ECOOP.2024.42,
  author =	{Veresov, Aleksey and Spenger, Jonas and Carbone, Paris and Haller, Philipp},
  title =	{{Failure Transparency in Stateful Dataflow Systems}},
  booktitle =	{38th European Conference on Object-Oriented Programming (ECOOP 2024)},
  pages =	{42:1--42:31},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-341-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{313},
  editor =	{Aldrich, Jonathan and Salvaneschi, Guido},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ECOOP.2024.42},
  URN =		{urn:nbn:de:0030-drops-208911},
  doi =		{10.4230/LIPIcs.ECOOP.2024.42},
  annote =	{Keywords: Failure transparency, stateful dataflow, operational semantics, checkpoint recovery}
}
Document
Wayfinding Stages: The Role of Familiarity, Gaze Events, and Visual Attention

Authors: Negar Alinaghi and Ioannis Giannopoulos

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Understanding the cognitive processes involved in wayfinding is crucial for both theoretical advances and practical applications in navigation systems development. This study explores how gaze behavior and visual attention contribute to our understanding of cognitive states during wayfinding. Based on the model proposed by Downs and Stea, which segments wayfinding into four distinct stages: self-localization, route planning, monitoring, and goal recognition, we conducted an outdoor wayfinding experiment with 56 participants. Given the significant role of spatial familiarity in wayfinding behavior, each participant navigated six different routes in both familiar and unfamiliar environments, with their eye movements being recorded. We provide a detailed examination of participants' gaze behavior and the actual objects of focus. Our findings reveal distinct gaze behavior patterns and visual attention, differentiating wayfinding stages while emphasizing the impact of spatial familiarity. This examination of visual engagement during wayfinding explains adaptive cognitive processes, demonstrating how familiarity influences navigation strategies. The results enhance our theoretical understanding of wayfinding and offer practical insights for developing navigation aids capable of predicting different wayfinding stages.

Cite as

Negar Alinaghi and Ioannis Giannopoulos. Wayfinding Stages: The Role of Familiarity, Gaze Events, and Visual Attention. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 1:1-1:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{alinaghi_et_al:LIPIcs.COSIT.2024.1,
  author =	{Alinaghi, Negar and Giannopoulos, Ioannis},
  title =	{{Wayfinding Stages: The Role of Familiarity, Gaze Events, and Visual Attention}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{1:1--1:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.1},
  URN =		{urn:nbn:de:0030-drops-208161},
  doi =		{10.4230/LIPIcs.COSIT.2024.1},
  annote =	{Keywords: Eye-tracking, Wayfinding, Spatial Familiarity, Visual Attention, Gaze Behavior}
}
Document
Spatial Nudging: Converging Persuasive Technologies, Spatial Design, and Behavioral Theories

Authors: Ayda Grisiute and Martin Raubal

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
This paper presents the Spatial Nudging framework - a theory-based framework that maps out nudging strategies in the mobility domain and refines its existing definitions. We link these strategies by highlighting the role of perceived affordances across physical and digital interventions based on the Nudge Theory and the Theory of Affordances. Furthermore, we propose to use graph representation techniques as a supportive methodology to better align perceived and actual environments, thereby enhancing the intervention strategies' effectiveness. We illustrate the applicability of the Spatial Nudging framework and the supportive methodology in the context of an E-bike City vision. This paper lays the foundation for future research on theoretically integrating physical and digital interventions to promote sustainable mobility.

Cite as

Ayda Grisiute and Martin Raubal. Spatial Nudging: Converging Persuasive Technologies, Spatial Design, and Behavioral Theories. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{grisiute_et_al:LIPIcs.COSIT.2024.5,
  author =	{Grisiute, Ayda and Raubal, Martin},
  title =	{{Spatial Nudging: Converging Persuasive Technologies, Spatial Design, and Behavioral Theories}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.5},
  URN =		{urn:nbn:de:0030-drops-208206},
  doi =		{10.4230/LIPIcs.COSIT.2024.5},
  annote =	{Keywords: spatial nudging, active mobility, Nudge Theory, Theory of Affordances, cognitive graphs}
}
Document
Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps?

Authors: Markus Kattenbeck, Daniel R. Montello, Martin Raubal, and Ioannis Giannopoulos

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Despite the frequent use of sketch maps in assessing environmental knowledge, it remains unclear how and to what degree familiarity impacts sketch map content. In the present study, we assess whether different levels of familiarity relate to differences in the content and spatial accuracy of environmental knowledge depicted in sketch maps drawn for the purpose of route instructions. To this end, we conduct a real-world wayfinding study with 91 participants, all of whom have to walk along a pre-defined route of approximately 2.3 km in length. Prior to the walk, we collect self-report familiarity ratings from participants for both a set of 15 landmarks and a set of areas we define as hexagons along the route. Once participants finished walking the route, they were asked to sketch a map of the route, specifically a sketch that would enable a person who had never walked the route to follow it. We found that participants unfamiliar with the areas along the route sketched fewer features than familiar people did. Contrary to our expectations, however, we found that landmarks were sketched or not regardless of participants' level of familiarity with the landmarks. We were also surprised that the level of familiarity was not correlated with the accuracy of the sketched order of features along the route, with the position of sketched features in relation to the route, or with the metric locational accuracy of feature placement on the sketches. These results lead us to conclude that different aspects of feature salience influence whether the features are included on sketch maps, independent of familiarity. They also point to the influence of task context on the content of sketch maps, again independent of familiarity. We propose further studies to more fully explore these ideas.

Cite as

Markus Kattenbeck, Daniel R. Montello, Martin Raubal, and Ioannis Giannopoulos. Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps?. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 6:1-6:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{kattenbeck_et_al:LIPIcs.COSIT.2024.6,
  author =	{Kattenbeck, Markus and Montello, Daniel R. and Raubal, Martin and Giannopoulos, Ioannis},
  title =	{{Is Familiarity Reflected in the Spatial Knowledge Revealed by Sketch Maps?}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{6:1--6:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.6},
  URN =		{urn:nbn:de:0030-drops-208215},
  doi =		{10.4230/LIPIcs.COSIT.2024.6},
  annote =	{Keywords: Familiarity, Spatial Knowledge, Sketch Maps}
}
Document
Revealing Differences in Public Transport Share Through District-Wise Comparison and Relating Them to Network Properties

Authors: Manuela Canestrini, Ioanna Gogousou, Dimitrios Michail, and Ioannis Giannopoulos

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Sustainable transport is becoming an increasingly pressing issue, with two major pillars being the reduction of car usage and the promotion of public transport. One way to approach both of these pillars is through the large number of daily commute trips in urban areas, and their modal split. Previous research has gathered knowledge on factors influencing the modal split mainly through travel surveys. We take a different approach by analysing the "raw" network and the time-optimised trips on a multi-modal graph. For the case study of Vienna, Austria, we investigate how the option to use a private car influences the modal split of routes towards the city centre. Additionally, we compare the modal split across seven inner districts and we relate properties of the public transport network to the respective share of public transport. The results suggest that different districts have varying public transport connections towards the city centre, with the share of public transport ranging from about 5% to 45%. This reveals areas where investments in public transport could reduce commute times to the city centre. Regarding network properties, our findings suggest that it is not sufficient to analyse the joint public transport network. Instead, individual public transport modalities should be examined. We show that the network length and the direction of the lines towards the city centre influence the proportion of subway and tram in the modal split.
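A small sketch of the district-wise aggregation described above, assuming a table of time-optimised trips already labelled with origin district and chosen mode. The district names and trip records are made up for illustration.

```python
from collections import Counter, defaultdict

def modal_split(trips):
    """Percentage share of each mode per district, from (district, mode) trip records."""
    per_district = defaultdict(Counter)
    for district, mode in trips:
        per_district[district][mode] += 1
    shares = {}
    for district, counts in per_district.items():
        total = sum(counts.values())
        shares[district] = {mode: round(100 * c / total, 1) for mode, c in counts.items()}
    return shares

if __name__ == "__main__":
    # Hypothetical commute trips towards the city centre.
    trips = [
        ("District A", "car"), ("District A", "car"), ("District A", "subway"),
        ("District B", "tram"), ("District B", "subway"), ("District B", "car"),
        ("District B", "subway"),
    ]
    for district, share in modal_split(trips).items():
        print(district, share)
```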

Cite as

Manuela Canestrini, Ioanna Gogousou, Dimitrios Michail, and Ioannis Giannopoulos. Revealing Differences in Public Transport Share Through District-Wise Comparison and Relating Them to Network Properties. In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 10:1-10:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{canestrini_et_al:LIPIcs.COSIT.2024.10,
  author =	{Canestrini, Manuela and Gogousou, Ioanna and Michail, Dimitrios and Giannopoulos, Ioannis},
  title =	{{Revealing Differences in Public Transport Share Through District-Wise Comparison and Relating Them to Network Properties}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{10:1--10:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.10},
  URN =		{urn:nbn:de:0030-drops-208255},
  doi =		{10.4230/LIPIcs.COSIT.2024.10},
  annote =	{Keywords: Mobility, Modal Split, Transportation Networks}
}
Document
Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings (Short Paper)

Authors: Meilin Shi, Krzysztof Janowicz, Zilong Liu, and Kitty Currier

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
In the classic Philosophical Investigations, Ludwig Wittgenstein suggests that the meaning of words is rooted in their use in ordinary language, challenging the idea of fixed rules determining the meaning of words. Likewise, we believe that the meaning of keywords and concepts in academic papers is shaped by their usage within the articles and evolves as research progresses. For example, the terms natural hazards and natural disasters were once used interchangeably, but this is rarely the case today. When searching for archived documents, such as those related to disaster relief, choosing the appropriate keyword is crucial and requires a deeper understanding of the historical context. To improve interoperability and promote reusability from a Research Data Management (RDM) perspective, we examine the dynamic nature of concepts, providing formal definitions of concept drift and its variants. By employing a case study of past COSIT (Conference on Spatial Information Theory) proceedings to support these definitions, we argue that a quantitative formalization can help systematically detect subsequent changes and enhance the overall interpretation of concepts.
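One plausible way to make the drift of a concept quantitative, in the spirit of the abstract: compare how a term's context (here, co-occurring keywords) changes between two periods of proceedings. The mini-corpus and the cosine-based drift score below are illustrative assumptions, not the paper's formal definitions.

```python
from collections import Counter
from math import sqrt

def context_profile(papers, term):
    """Count keywords that co-occur with `term` across a set of papers."""
    profile = Counter()
    for keywords in papers:
        if term in keywords:
            profile.update(k for k in keywords if k != term)
    return profile

def cosine(p, q):
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    # Hypothetical keyword sets from two eras of proceedings.
    early = [{"natural hazards", "natural disasters", "risk"},
             {"natural hazards", "gis", "natural disasters"}]
    recent = [{"natural hazards", "climate adaptation", "resilience"},
              {"natural hazards", "resilience", "early warning"}]
    drift = 1 - cosine(context_profile(early, "natural hazards"),
                       context_profile(recent, "natural hazards"))
    print(f"drift score: {drift:.2f}")  # 0 = stable usage, 1 = completely shifted context
```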

Cite as

Meilin Shi, Krzysztof Janowicz, Zilong Liu, and Kitty Currier. Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings (Short Paper). In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 23:1-23:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{shi_et_al:LIPIcs.COSIT.2024.23,
  author =	{Shi, Meilin and Janowicz, Krzysztof and Liu, Zilong and Currier, Kitty},
  title =	{{Towards Formalizing Concept Drift and Its Variants: A Case Study Using Past COSIT Proceedings}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{23:1--23:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.23},
  URN =		{urn:nbn:de:0030-drops-208386},
  doi =		{10.4230/LIPIcs.COSIT.2024.23},
  annote =	{Keywords: Concept Drift, Semantic Aging, Research Data Management}
}
Document
Large Language Models: Testing Their Capabilities to Understand and Explain Spatial Concepts (Short Paper)

Authors: Majid Hojati and Rob Feick

Published in: LIPIcs, Volume 315, 16th International Conference on Spatial Information Theory (COSIT 2024)


Abstract
Interest in applying Large Language Models (LLMs), which use natural language processing (NLP) to provide human-like responses to text-based questions, to geospatial tasks has grown rapidly. Research shows that LLMs can help generate software code and answer some types of geographic questions to varying degrees even without fine-tuning. However, further research is required to explore the types of spatial questions they answer correctly, their abilities to apply spatial reasoning, and the variability between models. In this paper we examine the ability of four LLMs (GPT-3.5 and GPT-4, Llama 2.0, Falcon 40B) to answer spatial questions that range from basic calculations to more advanced geographic concepts. The intent of this comparison is twofold. First, we demonstrate an extensible method for evaluating LLMs' limitations in supporting spatial data science through correct calculations and code generation. Relatedly, we also consider how these models can aid geospatial learning by providing text-based explanations of spatial concepts and operations. Our research shows common strengths in more basic types of questions, and mixed results for questions relating to more advanced spatial concepts. These results provide insights that may be used to inform strategies for testing and fine-tuning these models to increase their understanding of key spatial concepts.
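A sketch of the kind of evaluation harness such a comparison implies: compute a ground-truth spatial calculation, ask a model the same question, and score the extracted numeric answer against a tolerance. The stubbed ask_model function stands in for a real call to GPT, Llama, or Falcon (no actual API is used), and the haversine question, coordinates, and 5% tolerance are assumptions, not the paper's test set.

```python
import math
import re

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (spherical Earth, R = 6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def ask_model(question: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a canned answer for the demo."""
    return "The distance is roughly 252 km."

def score_distance_question(lat1, lon1, lat2, lon2, tolerance=0.05):
    truth = haversine_km(lat1, lon1, lat2, lon2)
    answer = ask_model(f"What is the great-circle distance between "
                       f"({lat1}, {lon1}) and ({lat2}, {lon2}) in km?")
    numbers = re.findall(r"\d+(?:\.\d+)?", answer)
    if not numbers:
        return False, truth, None
    guess = float(numbers[0])
    return abs(guess - truth) / truth <= tolerance, truth, guess

if __name__ == "__main__":
    ok, truth, guess = score_distance_question(48.2082, 16.3738, 50.0755, 14.4378)
    print(f"truth={truth:.1f} km, model said {guess}, within tolerance: {ok}")
```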

Cite as

Majid Hojati and Rob Feick. Large Language Models: Testing Their Capabilities to Understand and Explain Spatial Concepts (Short Paper). In 16th International Conference on Spatial Information Theory (COSIT 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 315, pp. 31:1-31:9, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{hojati_et_al:LIPIcs.COSIT.2024.31,
  author =	{Hojati, Majid and Feick, Rob},
  title =	{{Large Language Models: Testing Their Capabilities to Understand and Explain Spatial Concepts}},
  booktitle =	{16th International Conference on Spatial Information Theory (COSIT 2024)},
  pages =	{31:1--31:9},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-330-0},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{315},
  editor =	{Adams, Benjamin and Griffin, Amy L. and Scheider, Simon and McKenzie, Grant},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.31},
  URN =		{urn:nbn:de:0030-drops-208460},
  doi =		{10.4230/LIPIcs.COSIT.2024.31},
  annote =	{Keywords: Geospatial concepts, Large Language Models, LLM, GPT, Llama, Falcon}
}
Document
Effect Semantics for Quantum Process Calculi

Authors: Lorenzo Ceragioli, Fabio Gadducci, Giuseppe Lomurno, and Gabriele Tedeschi

Published in: LIPIcs, Volume 311, 35th International Conference on Concurrency Theory (CONCUR 2024)


Abstract
The development of quantum communication protocols sparked the interest in quantum extensions of process calculi and behavioural equivalences, but defining a bisimilarity that matches the observational properties of a quantum-capable system is a surprisingly difficult task. The two proposals explicitly addressing this issue, qCCS and lqCCS, do not define an algorithmic verification scheme: the bisimilarity of two processes is proven by comparing their behaviour under all input states. We introduce a new semantic model based on effects, i.e. probabilistic predicates on quantum states that represent their observable properties. We define and investigate the properties of effect distributions and effect labelled transition systems (eLTSs), generalizing probability distributions and probabilistic labelled transition systems (pLTSs), respectively. As a proof of concept, we provide an eLTS-based semantics for a minimal quantum process algebra, which we prove sound and complete with respect to the observable probabilistic behaviour of quantum processes. To the best of our knowledge, ours is the first algorithmically verifiable proposal that abides by the properties of quantum theory.

Cite as

Lorenzo Ceragioli, Fabio Gadducci, Giuseppe Lomurno, and Gabriele Tedeschi. Effect Semantics for Quantum Process Calculi. In 35th International Conference on Concurrency Theory (CONCUR 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 311, pp. 16:1-16:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{ceragioli_et_al:LIPIcs.CONCUR.2024.16,
  author =	{Ceragioli, Lorenzo and Gadducci, Fabio and Lomurno, Giuseppe and Tedeschi, Gabriele},
  title =	{{Effect Semantics for Quantum Process Calculi}},
  booktitle =	{35th International Conference on Concurrency Theory (CONCUR 2024)},
  pages =	{16:1--16:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-339-3},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{311},
  editor =	{Majumdar, Rupak and Silva, Alexandra},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2024.16},
  URN =		{urn:nbn:de:0030-drops-207886},
  doi =		{10.4230/LIPIcs.CONCUR.2024.16},
  annote =	{Keywords: Quantum process calculi, probabilistic LTSs, effect LTSs}
}
Document
Bit-Array-Based Alternatives to HyperLogLog

Authors: Svante Janson, Jérémie Lumbroso, and Robert Sedgewick

Published in: LIPIcs, Volume 302, 35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024)


Abstract
We present a family of algorithms for the problem of estimating the number of distinct items in an input stream that are simple to implement and are appropriate for practical applications. Our algorithms are a logical extension of the series of algorithms developed by Flajolet and his coauthors starting in 1983 that culminated in the widely used HyperLogLog algorithm. These algorithms divide the input stream into M substreams and lead to a time-accuracy tradeoff where a constant number of bits per substream are saved to achieve a relative accuracy proportional to 1/√M. Our algorithms use just one or two bits per substream. Their effectiveness is demonstrated by a proof of approximate normality, with explicit expressions for standard errors that inform parameter settings and allow proper quantitative comparisons with other methods. Hypotheses about performance are validated through experiments using a realistic input stream, with the conclusion that our algorithms are more accurate than HyperLogLog when using the same amount of memory, and they use two-thirds as much memory as HyperLogLog to achieve a given accuracy.
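For orientation, here is a compact sketch of the stochastic-averaging skeleton shared by HyperLogLog and the bit-array variants described above: hash each item, use a few bits to pick one of M substreams, and keep a small per-substream statistic. The estimator shown is the standard HyperLogLog one (with its usual constant and no small-range corrections), not the one- or two-bit estimators of this paper; the parameter b and stream are illustrative.

```python
import hashlib
import math

def hyperloglog(stream, b=10):
    """Classic HyperLogLog with M = 2**b substreams, shown for context.

    The paper's algorithms keep only one or two bits per substream
    instead of a full leading-zero count.
    """
    m = 1 << b
    registers = [0] * m
    for item in stream:
        x = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        j = x & (m - 1)                    # low b bits pick the substream
        w = x >> b                         # remaining 64 - b bits feed the statistic
        rho = 64 - b - w.bit_length() + 1  # position of the leftmost 1-bit
        registers[j] = max(registers[j], rho)
    alpha = 0.7213 / (1 + 1.079 / m)       # standard bias-correction constant
    return alpha * m * m / sum(2.0 ** -r for r in registers)

if __name__ == "__main__":
    n = 100_000
    est = hyperloglog(range(n))
    print(f"true {n}, estimated {est:.0f} ({abs(est - n) / n:.1%} error)")
```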

Cite as

Svante Janson, Jérémie Lumbroso, and Robert Sedgewick. Bit-Array-Based Alternatives to HyperLogLog. In 35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 302, pp. 5:1-5:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{janson_et_al:LIPIcs.AofA.2024.5,
  author =	{Janson, Svante and Lumbroso, J\'{e}r\'{e}mie and Sedgewick, Robert},
  title =	{{Bit-Array-Based Alternatives to HyperLogLog}},
  booktitle =	{35th International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2024)},
  pages =	{5:1--5:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-329-4},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{302},
  editor =	{Mailler, C\'{e}cile and Wild, Sebastian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AofA.2024.5},
  URN =		{urn:nbn:de:0030-drops-204402},
  doi =		{10.4230/LIPIcs.AofA.2024.5},
  annote =	{Keywords: Cardinality estimation, sketching, Hyperloglog}
}
Document
Improved Cut Strategy for Tensor Network Contraction Orders

Authors: Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen

Published in: LIPIcs, Volume 301, 22nd International Symposium on Experimental Algorithms (SEA 2024)


Abstract
In the field of quantum computing, simulating quantum systems on classical computers is crucial. Tensor networks are fundamental in simulating quantum systems. A tensor network is a collection of tensors that need to be contracted into a result tensor. Tensor contraction is a generalization of matrix multiplication to higher-order tensors. The contractions can be performed in different orders, and the order has a significant impact on the number of floating point operations (flops) needed to get the result tensor. It is known that finding an optimal contraction order is NP-hard. The current state-of-the-art approach for finding efficient contraction orders is to combine graph partitioning with a greedy strategy. Although heavily used in practice, the current approach ignores so-called free indices, chooses node weights without regarding previous computations, and requires numerous hyperparameters that need to be tuned at runtime. In this paper, we address these shortcomings by developing a novel graph cut strategy. The proposed modifications yield contraction orders that significantly reduce the number of flops in the tensor contractions compared to the current state of the art. Moreover, by removing the need for hyperparameter tuning at runtime, our approach converges to an efficient solution faster, which reduces the required optimization time by at least an order of magnitude.
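A tiny worked example of why contraction order matters, using the matrix-chain special case with arbitrary shapes: contracting (A·B)·C versus A·(B·C) yields the same tensor but a flop count that differs by two orders of magnitude. This is only background for the abstract's claim, not the paper's graph-cut strategy.

```python
import numpy as np

def matmul_flops(p, q, r):
    """Multiply-add count for a (p x q) @ (q x r) product."""
    return 2 * p * q * r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 10))
    B = rng.standard_normal((10, 1000))
    C = rng.standard_normal((1000, 10))

    # Order 1: (A @ B) @ C builds a huge 1000 x 1000 intermediate tensor.
    flops1 = matmul_flops(1000, 10, 1000) + matmul_flops(1000, 1000, 10)
    # Order 2: A @ (B @ C) builds a tiny 10 x 10 intermediate tensor.
    flops2 = matmul_flops(10, 1000, 10) + matmul_flops(1000, 10, 10)

    assert np.allclose((A @ B) @ C, A @ (B @ C))   # same result either way
    print(f"(AB)C: {flops1:,} flops   A(BC): {flops2:,} flops")
    # numpy's own contraction-order search reports a similar saving:
    print(np.einsum_path("ij,jk,kl->il", A, B, C, optimize="optimal")[1])
```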

Cite as

Christoph Staudt, Mark Blacher, Julien Klaus, Farin Lippmann, and Joachim Giesen. Improved Cut Strategy for Tensor Network Contraction Orders. In 22nd International Symposium on Experimental Algorithms (SEA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 301, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{staudt_et_al:LIPIcs.SEA.2024.27,
  author =	{Staudt, Christoph and Blacher, Mark and Klaus, Julien and Lippmann, Farin and Giesen, Joachim},
  title =	{{Improved Cut Strategy for Tensor Network Contraction Orders}},
  booktitle =	{22nd International Symposium on Experimental Algorithms (SEA 2024)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-325-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{301},
  editor =	{Liberti, Leo},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SEA.2024.27},
  URN =		{urn:nbn:de:0030-drops-203924},
  doi =		{10.4230/LIPIcs.SEA.2024.27},
  annote =	{Keywords: tensor network, contraction order, graph partitioning, quantum simulation}
}
