61 Search Results for "Shi, Jonathan"


Document
Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms

Authors: Aadityan Ganesh, Clayton Thomas, and S. Matthew Weinberg

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Roughgarden [Roughgarden, 2020] initiates the study of Transaction Fee Mechanisms (TFMs), and posits that the on-chain game of a "good" TFM should be on-chain simple (OnC-S), i.e., incentive compatible for both the users and the miner. Recent work of Ganesh, Thomas, and Weinberg [Ganesh et al., 2024] posits that they should additionally be Off-Chain Influence-Proof (OffC-IP), meaning that the miner cannot achieve any additional revenue by separately conducting an off-chain auction to determine on-chain inclusion. They observe that a cryptographic second-price auction satisfies both properties, but leave open the question of whether other mechanisms (such as those not dependent on cryptography) satisfy them. In this paper, we characterize OffC-IP TFMs: they are those satisfying a burn identity relating the burn rule to the allocation rule. In particular, we show that an auction is OffC-IP if and only if its (induced direct-revelation) allocation rule X̄(⋅) and burn rule B̅(⋅) (both of which take as input users' values v₁, … , v_n) are truthful when viewing (X̄(⋅), B̅(⋅)) as the allocation and pricing rule of a multi-item auction for a single additive buyer with values (φ(v₁),…, φ(v_n)) equal to the users' virtual values. Building on this burn identity, we characterize OffC-IP and OnC-S TFMs that are deterministic and do not use cryptography: they are posted-price mechanisms with specially-tuned burns. As a corollary, we show that such TFMs can only exist with infinite supply and prior-dependence. However, we show that for randomized TFMs, there are additional OnC-S and OffC-IP auctions that do not use cryptography (even when there is finite supply, under prior-dependence with a bounded prior distribution). Holistically, our results show that while OffC-IP is a fairly stringent requirement, families of OffC-IP mechanisms can be found for a variety of settings.
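The virtual-value transform that the burn identity is phrased in terms of can be made concrete with a small sketch. This is Myerson's standard transform, not code from the paper, and the uniform prior is an assumption chosen purely for illustration.

```python
# Myerson's virtual-value transform phi(v) = v - (1 - F(v)) / f(v), the map
# the abstract's characterization refers to. The uniform prior on [0, 1] is
# an illustrative assumption; the characterization is prior-dependent.

def virtual_value(v, cdf, pdf):
    """Virtual value of a bidder with value v under a prior with CDF/PDF."""
    return v - (1.0 - cdf(v)) / pdf(v)

# Uniform prior on [0, 1]: F(v) = v and f(v) = 1, so phi(v) = 2v - 1.
uniform_cdf = lambda v: v
uniform_pdf = lambda v: 1.0

values = [0.2, 0.5, 0.9]
phis = [virtual_value(v, uniform_cdf, uniform_pdf) for v in values]
print(phis)  # approximately [-0.6, 0.0, 0.8]
```

Under the characterization above, checking OffC-IP amounts to checking truthfulness of (X̄, B̅) against these φ-values rather than against the raw bids.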

Cite as

Aadityan Ganesh, Clayton Thomas, and S. Matthew Weinberg. Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 65:1-65:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


BibTeX

@InProceedings{ganesh_et_al:LIPIcs.ITCS.2026.65,
  author =	{Ganesh, Aadityan and Thomas, Clayton and Weinberg, S. Matthew},
  title =	{{Characterizing Off-Chain Influence Proof Transaction Fee Mechanisms}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{65:1--65:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.65},
  URN =		{urn:nbn:de:0030-drops-253527},
  doi =		{10.4230/LIPIcs.ITCS.2026.65},
  annote =	{Keywords: Transaction Fee Mechanism Design, Off-Chain Influence Proofness, Blockchain, Decentralized Finance, Simple Auctions}
}
Document
Decentralized Data Archival: New Definitions and Constructions

Authors: Elaine Shi, Rose Silver, and Changrui Mu

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
We initiate the study of a new abstraction called incremental decentralized data archival (iDDA). Specifically, imagine that there is an ever-growing, massive database such as a blockchain, a comprehensive human knowledge base like Wikipedia, or the Internet Archive. We want to build a decentralized archival system for such datasets to ensure long-term robustness and sustainability. We identify several important properties that an iDDA scheme should satisfy. First, to promote heterogeneity and decentralization, we want to encourage even weak nodes with limited space (e.g., users' home computers) to contribute. The minimum space requirement to contribute should be approximately independent of the data size. Second, if a collection of nodes together receive rewards commensurate with contributing a total of m blocks of space, then we want the following reassurances: 1) if m is at least the database size, we should be able to reconstruct the entire dataset; and 2) these nodes should actually be committing roughly m space in aggregate - specifically, when m is much larger than the data size, these nodes cannot store just one copy of the database while impersonating arbitrarily many pseudonyms to collect unbounded rewards. We propose new definitions that mathematically formalize the aforementioned requirements of an iDDA scheme. We also devise an efficient construction in the random oracle model which satisfies the desired security requirements. Our scheme incurs only Õ(1) audit cost, as well as Õ(1) update cost for both the publisher and each node, where Õ(⋅) hides polylogarithmic factors. Further, the minimum space provisioning required to contribute is as small as polylogarithmic. Our construction exposes several interesting technical challenges. Specifically, we show that a straightforward application of the standard hierarchical data structure fails, since both our security definition and the underlying cryptographic primitives we employ lack the desired compositional guarantees. We devise novel techniques to overcome these compositional issues, resulting in a construction with provable security while still retaining efficiency. Finally, our new definitions also make a conceptual contribution, laying the theoretical groundwork for the study of iDDA. We raise several interesting open problems in this direction.
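To see why Õ(1) audit cost is plausible at all, here is a generic random-challenge storage audit built from Merkle proofs. This is a textbook illustration, not the paper's iDDA construction (which must additionally defend against pseudonymous deduplication, a property this toy check does not provide).

```python
# Generic sketch: an auditor challenges one random block and verifies a
# Merkle inclusion proof, so the audit touches only O(log n) hashes.
import hashlib
import random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_tree(leaves):
    """Return all tree levels, leaves first; assumes a power-of-two count."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """Collect the sibling hash on each level along the path to the root."""
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[idx ^ 1])
        idx //= 2
    return path

def verify(root, leaf, idx, path):
    """Recompute the root from the leaf and its sibling path."""
    acc = h(leaf)
    for sib in path:
        acc = h(acc + sib) if idx % 2 == 0 else h(sib + acc)
        idx //= 2
    return acc == root

blocks = [bytes([i]) * 32 for i in range(8)]        # 8 stored data blocks
levels = merkle_tree(blocks)
root = levels[-1][0]
challenge = random.randrange(len(blocks))           # auditor's random index
proof = prove(levels, challenge)
print(verify(root, blocks[challenge], challenge, proof))  # True
```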

Cite as

Elaine Shi, Rose Silver, and Changrui Mu. Decentralized Data Archival: New Definitions and Constructions. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 116:1-116:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


BibTeX

@InProceedings{shi_et_al:LIPIcs.ITCS.2026.116,
  author =	{Shi, Elaine and Silver, Rose and Mu, Changrui},
  title =	{{Decentralized Data Archival: New Definitions and Constructions}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{116:1--116:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.116},
  URN =		{urn:nbn:de:0030-drops-254037},
  doi =		{10.4230/LIPIcs.ITCS.2026.116},
  annote =	{Keywords: Decentralized Data Archival}
}
Document
Identity Testing for Circuits with Exponentiation Gates

Authors: Jiatu Li and Mengdi Wu

Published in: LIPIcs, Volume 362, 17th Innovations in Theoretical Computer Science Conference (ITCS 2026)


Abstract
Motivated by practical applications in the design of optimization compilers for neural networks, we initiate the study of identity testing problems for arithmetic circuits augmented with exponentiation gates that compute the real function x↦ e^x. These circuits compute real functions of the form P(→x)/P'(→x), where both P(→x) and P'(→x) are exponential polynomials ∑_{i = 1}^k f_i(→x)⋅ exp((g_i(→x))/(h_i(→x))), for polynomials f_i(→x), g_i(→x), and h_i(→x). We formalize a black-box query model over finite fields for this class of circuits, which is mathematically simple and reflects constraints faced by real-world neural network compilers. We prove that a simple and efficient randomized identity testing algorithm achieves perfect completeness and non-trivial soundness. Concurrently with our work, the algorithm has been implemented in the optimization compiler Mirage by Wu et al. (OSDI 2025), demonstrating promising empirical performance in both efficiency and soundness error. Finally, we propose a number-theoretic conjecture under which our algorithm is sound with high probability.
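For background, the style of black-box test the abstract builds on is the classic Schwartz-Zippel identity test for ordinary polynomials: evaluate both circuits at a random point over a finite field and compare. The paper's contribution is extending this style of test to circuits with e^x gates, which this toy deliberately does not model.

```python
# Baseline randomized polynomial identity test over a finite field.
import random

P = (1 << 61) - 1  # a Mersenne prime field, chosen arbitrarily here

def pit_equal(f, g, n_vars, trials=20):
    """Randomized check that f and g agree as polynomials over GF(P)."""
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(n_vars)]
        if f(*point) % P != g(*point) % P:
            return False  # a witness point: definitely not identical
    return True           # identical with high probability (Schwartz-Zippel)

# (x + y)^2 vs x^2 + 2xy + y^2: identical as polynomials.
f = lambda x, y: (x + y) ** 2
g = lambda x, y: x * x + 2 * x * y + y * y
print(pit_equal(f, g, 2))                            # True
print(pit_equal(f, lambda x, y: x * x + y * y, 2))   # False (with overwhelming probability)
```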

Cite as

Jiatu Li and Mengdi Wu. Identity Testing for Circuits with Exponentiation Gates. In 17th Innovations in Theoretical Computer Science Conference (ITCS 2026). Leibniz International Proceedings in Informatics (LIPIcs), Volume 362, pp. 95:1-95:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2026)


BibTeX

@InProceedings{li_et_al:LIPIcs.ITCS.2026.95,
  author =	{Li, Jiatu and Wu, Mengdi},
  title =	{{Identity Testing for Circuits with Exponentiation Gates}},
  booktitle =	{17th Innovations in Theoretical Computer Science Conference (ITCS 2026)},
  pages =	{95:1--95:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-410-9},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{362},
  editor =	{Saraf, Shubhangi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2026.95},
  URN =		{urn:nbn:de:0030-drops-253821},
  doi =		{10.4230/LIPIcs.ITCS.2026.95},
  annote =	{Keywords: Polynomial Identity Testing, Exponential Polynomials}
}
Document
Efficient Byzantine Reliable Broadcast in the Failure Case

Authors: Thomas Locher

Published in: LIPIcs, Volume 361, 29th International Conference on Principles of Distributed Systems (OPODIS 2025)


Abstract
Reliable broadcast is a fundamental primitive in distributed computing that is widely used in various applications. Several new reliable broadcast algorithms have been presented in recent years, primarily focusing on reducing the communication complexity, which is the total number of exchanged bits in the worst case. While significant progress has been achieved, all proposed algorithms share a common weakness: executions may fail, i.e., no message is ever delivered, while incurring a communication complexity equal, or nearly equal, to that of executions in which a message is delivered. In fact, a single Byzantine node, acting as the dedicated sender, is sufficient to trigger such executions, causing all nodes to consume bandwidth in vain. This paper introduces the novel concept of a reliable broadcast detector, a distributed algorithm that can be coupled with a reliable broadcast algorithm to minimize the communication complexity of failed executions. Two concrete detectors are presented with different requirements and properties. Additionally, reliable broadcast algorithms that utilize detectors are introduced, with the main algorithm guaranteeing an overhead factor, compared to an ideal failure-free execution, that tends to 2 as the network size increases. Furthermore, a lower bound is proven showing that an overhead factor of 5/3 is inevitable when the sender initially broadcasts the message, as is the case for the proposed algorithm. Therefore, the algorithm achieves a bound that is close to optimal among algorithms with this property.

Cite as

Thomas Locher. Efficient Byzantine Reliable Broadcast in the Failure Case. In 29th International Conference on Principles of Distributed Systems (OPODIS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 361, pp. 12:1-12:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{locher:LIPIcs.OPODIS.2025.12,
  author =	{Locher, Thomas},
  title =	{{Efficient Byzantine Reliable Broadcast in the Failure Case}},
  booktitle =	{29th International Conference on Principles of Distributed Systems (OPODIS 2025)},
  pages =	{12:1--12:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-409-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{361},
  editor =	{Arusoaie, Andrei and Onica, Emanuel and Spear, Michael and Tucci-Piergiovanni, Sara},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2025.12},
  URN =		{urn:nbn:de:0030-drops-251854},
  doi =		{10.4230/LIPIcs.OPODIS.2025.12},
  annote =	{Keywords: asynchronous networks, reliable broadcast, communication complexity}
}
Document
Asynchronous Approximate Agreement with Quadratic Communication

Authors: Mose Mizrahi Erbes and Roger Wattenhofer

Published in: LIPIcs, Volume 361, 29th International Conference on Principles of Distributed Systems (OPODIS 2025)


Abstract
We study approximate agreement in an asynchronous network of n parties, up to t of which are byzantine. This is an agreement task where the parties output approximately equal values in the convex hull of their inputs. In an asynchronous network, it can be solved with the optimal resilience t < n/3 by forcing the parties to reliably broadcast their messages, thus preventing inconsistent byzantine behavior. This costs Θ(n²) messages per reliable broadcast, or Θ(n³) messages per protocol iteration. In this work, we forgo reliable broadcast to achieve asynchronous approximate agreement against t < n/3 faults with quadratic communication. In a tree with maximum degree Δ and centroid decomposition height h, we achieve edge agreement (agreement on two adjacent vertices) in at most 6h + 1 rounds with 𝒪(n²) messages of size 𝒪(log Δ + log h) per round. We do this by designing a 6-round multivalued 2-graded consensus protocol and using it to construct a recursive edge agreement protocol. Then, we achieve edge agreement on the infinite path ℤ, again by using 2-graded consensus. Finally, we show that our edge agreement protocol enables approximate agreement in ℝ (with outputs that are at most some small parameter ε > 0 apart) in 6log₂(M/ε) + 𝒪(log log(M/ε)) rounds with 𝒪(n²) messages of size 𝒪(log log(M/ε)) per round, where M is the maximum non-byzantine input magnitude.
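The ε-agreement goal itself can be illustrated with a toy, synchronous trim-and-midpoint simulation: each honest party discards the t lowest and t highest values it receives (byzantine senders may report different values to different receivers) and moves to the midpoint of what remains. Everything below, including the adversary, is illustrative only; the paper's actual protocol is asynchronous and works via graded consensus, which this sketch does not model.

```python
# Toy synchronous simulation of byzantine-tolerant epsilon-agreement in R.

def agree(honest, t, byz_vals_for, eps):
    """Iterate trim-and-midpoint until honest values are within eps."""
    vals, rounds = list(honest), 0
    while max(vals) - min(vals) > eps:
        nxt = []
        for i in range(len(vals)):
            recv = sorted(vals + byz_vals_for(i))  # honest + t byzantine values
            core = recv[t:len(recv) - t]           # trim t from each side
            nxt.append((core[0] + core[-1]) / 2)
        vals, rounds = nxt, rounds + 1
    return vals, rounds

# 7 honest parties and t = 2 byzantine senders (n = 9 > 3t) that push
# different extremes to different receivers.
inputs = [0.0, 1.0, 2.0, 5.0, 9.0, 9.5, 10.0]
adversary = lambda i: [-100.0, 100.0] if i % 2 == 0 else [100.0, 100.0]
final, rounds = agree(inputs, 2, adversary, 1e-3)
print(max(final) - min(final) <= 1e-3)  # True: outputs agree within eps
```

Note that the outputs also stay inside the convex hull of the honest inputs, since the trimming discards all t adversarial values from at least one side.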

Cite as

Mose Mizrahi Erbes and Roger Wattenhofer. Asynchronous Approximate Agreement with Quadratic Communication. In 29th International Conference on Principles of Distributed Systems (OPODIS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 361, pp. 16:1-16:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{mizrahierbes_et_al:LIPIcs.OPODIS.2025.16,
  author =	{Mizrahi Erbes, Mose and Wattenhofer, Roger},
  title =	{{Asynchronous Approximate Agreement with Quadratic Communication}},
  booktitle =	{29th International Conference on Principles of Distributed Systems (OPODIS 2025)},
  pages =	{16:1--16:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-409-3},
  ISSN =	{1868-8969},
  year =	{2026},
  volume =	{361},
  editor =	{Arusoaie, Andrei and Onica, Emanuel and Spear, Michael and Tucci-Piergiovanni, Sara},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.OPODIS.2025.16},
  URN =		{urn:nbn:de:0030-drops-251890},
  doi =		{10.4230/LIPIcs.OPODIS.2025.16},
  annote =	{Keywords: Approximate agreement, byzantine fault tolerance, communication complexity}
}
Document
Clustering in Varying Metrics

Authors: Deeparnab Chakrabarty, Jonathan Conroy, and Ankita Sarkar

Published in: LIPIcs, Volume 360, 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)


Abstract
We introduce the aggregated clustering problem, where one is given T instances of a center-based clustering task over the same n points, but under different metrics. The goal is to open k centers to minimize an aggregate of the clustering costs - e.g., the average or maximum - where the cost is measured via k-center/median/means objectives. More generally, we minimize a norm Ψ over the T cost values. We show that for T ≥ 3, the problem is inapproximable to any finite factor in polynomial time. For T = 2, we give constant-factor approximations. We also show W[2]-hardness when parameterized by k, but obtain f(k,T)poly(n)-time 3-approximations when parameterized by both k and T. When the metrics have structure, we obtain efficient parameterized approximation schemes (EPAS). If all T metrics have bounded ε-scatter dimension, we achieve a (1+ε)-approximation in f(k,T,ε)poly(n) time. If the metrics are induced by edge weights on a common graph G of bounded treewidth tw, and Ψ is the sum function, we get an EPAS in f(T,ε,tw)poly(n,k) time. Conversely, unless (randomized) ETH is false, any finite-factor approximation is impossible when parameterized by T alone, even when the treewidth is tw = Ω(polylog n).
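The aggregated objective is easy to state concretely: given T distance matrices over the same points and a set of open centers, compute the per-metric cost and combine with a norm Ψ. The brute-force search and the 4-point example below are illustrative only (for tiny n), not the paper's algorithms.

```python
# Sketch of the aggregated k-center objective over T metrics.
from itertools import combinations

def kcenter_cost(dist, centers):
    """Max over points of the distance to the nearest open center."""
    return max(min(dist[p][c] for c in centers) for p in range(len(dist)))

def best_aggregate(dists, k, psi):
    """Brute-force the best k centers under aggregate norm psi (tiny n only)."""
    n = len(dists[0])
    return min(
        psi([kcenter_cost(d, centers) for d in dists])
        for centers in combinations(range(n), k)
    )

# Two metrics on 4 points (symmetric distance matrices) that pair the
# points differently, so no single-metric-optimal solution dominates.
d1 = [[0, 1, 4, 4], [1, 0, 4, 4], [4, 4, 0, 1], [4, 4, 1, 0]]
d2 = [[0, 4, 1, 4], [4, 0, 4, 1], [1, 4, 0, 4], [4, 1, 4, 0]]
print(best_aggregate([d1, d2], 2, max))  # 1
```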

Cite as

Deeparnab Chakrabarty, Jonathan Conroy, and Ankita Sarkar. Clustering in Varying Metrics. In 45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 360, pp. 19:1-19:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{chakrabarty_et_al:LIPIcs.FSTTCS.2025.19,
  author =	{Chakrabarty, Deeparnab and Conroy, Jonathan and Sarkar, Ankita},
  title =	{{Clustering in Varying Metrics}},
  booktitle =	{45th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2025)},
  pages =	{19:1--19:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-406-2},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{360},
  editor =	{Aiswarya, C. and Mehta, Ruta and Roy, Subhajit},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2025.19},
  URN =		{urn:nbn:de:0030-drops-251007},
  doi =		{10.4230/LIPIcs.FSTTCS.2025.19},
  annote =	{Keywords: Clustering, approximation algorithms, LP rounding, parameterized and exact algorithms, dynamic programming, fixed parameter tractability, hardness of approximation}
}
Document
Brief Announcement
Brief Announcement: From Few to Many Faults: Adaptive Byzantine Agreement with Optimal Communication

Authors: Andrei Constantinescu, Marc Dufay, Anton Paramonov, and Roger Wattenhofer

Published in: LIPIcs, Volume 356, 39th International Symposium on Distributed Computing (DISC 2025)


Abstract
We study the problem of Strong Byzantine Agreement and establish tight upper and lower bounds on communication complexity, parameterized by the actual number of Byzantine faults. Specifically, for a system of n parties tolerating up to t Byzantine faults, out of which only f ≤ t are actually faulty, we obtain the following results: In the partially synchronous setting, we present the first Byzantine Agreement protocol that achieves adaptive communication complexity of 𝒪(n + t ⋅ f) words, which is asymptotically optimal. Our protocol has an optimal resilience of t < n/3. In the asynchronous setting, we prove a lower bound of Ω(n + t²) on the expected number of messages, and design an almost matching protocol with an optimal resilience that solves agreement with 𝒪((n + t²)⋅ log n) words. Our main technical contribution in the asynchronous setting is the utilization of a bipartite expander graph that allows for low-cost information dissemination.

Cite as

Andrei Constantinescu, Marc Dufay, Anton Paramonov, and Roger Wattenhofer. Brief Announcement: From Few to Many Faults: Adaptive Byzantine Agreement with Optimal Communication. In 39th International Symposium on Distributed Computing (DISC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 356, pp. 52:1-52:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{constantinescu_et_al:LIPIcs.DISC.2025.52,
  author =	{Constantinescu, Andrei and Dufay, Marc and Paramonov, Anton and Wattenhofer, Roger},
  title =	{{Brief Announcement: From Few to Many Faults: Adaptive Byzantine Agreement with Optimal Communication}},
  booktitle =	{39th International Symposium on Distributed Computing (DISC 2025)},
  pages =	{52:1--52:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-402-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{356},
  editor =	{Kowalski, Dariusz R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2025.52},
  URN =		{urn:nbn:de:0030-drops-248680},
  doi =		{10.4230/LIPIcs.DISC.2025.52},
  annote =	{Keywords: Byzantine Agreement, Communication Complexity, Adaptive Communication Complexity, Resilience}
}
Document
Hierarchical Consensus: Scalability Through Optimism and Weak Liveness

Authors: Pedro Antonino, Antoine Durand, and A. W. Roscoe

Published in: LIPIcs, Volume 356, 39th International Symposium on Distributed Computing (DISC 2025)


Abstract
Scalability is a central concern of Byzantine Fault Tolerant (BFT) distributed protocols. The ubiquitous approach to working around the well-known Dolev-Reischuk Ω(n²) communication complexity lower bound is to use a random selection process to draw a hopefully small committee from a population of agents to run the communication-heavy protocol. We propose a notion of hierarchical consensus that combines two sub-protocols: an optimistic primary sub-protocol that can tolerate less than 1/2 failures and a fallback secondary protocol that can tolerate less than 1/3 failures; we achieve the higher failure threshold by requiring a weaker notion of liveness for the primary. This difference in fault tolerance between the primary and the secondary is reflected in the size of the committees implementing these protocols. For a population of agents with close to 2/3 of honest agents, we need to select a committee with hundreds of agents to reach the level of tolerance expected for the primary, whereas we need thousands to reach the level expected for the secondary with a very small probability of error ε. Our hierarchical construct is such that if the primary comes to a decision, it can simply propagate it to the secondary protocol, so the secondary does not need to independently run a full agreement protocol. Our architecture is flexible and allows us to apply our technique to most protocols that are based on random sampling. By studying hierarchical protocols, we discovered new theoretical results of independent interest. Specifically, the ability to hand over from a primary protocol requires a new Justifiability property that allows agents to pre-decide on a value, such that if the protocol decides, it must be on that pre-decided value.
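The hundreds-vs-thousands committee sizes come from a standard sampling bound: draw a committee at random from a population with honest fraction p and ask how likely the sampled honest fraction is to fall below the required threshold. The binomial model below (sampling with replacement, honest fraction 0.70) is an illustrative approximation, not the paper's exact analysis.

```python
# Tail bound sketch: probability a random committee misses its honesty threshold.
from math import comb

def tail_below(n, p, threshold):
    """P[fewer than threshold*n honest members in a committee of size n]."""
    k_max = int(threshold * n) - 1  # strictly below the threshold
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

p = 0.70  # hypothetical honest fraction, a bit above 2/3
print(tail_below(200, p, 0.5) < 1e-6)  # True: 200 agents easily clear a 1/2 threshold
print(tail_below(200, p, 2 / 3))       # far larger error probability at a 2/3 threshold
```

This is why a primary tolerating < 1/2 faults can run on a much smaller committee than a secondary tolerating < 1/3, at the same error level ε.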

Cite as

Pedro Antonino, Antoine Durand, and A. W. Roscoe. Hierarchical Consensus: Scalability Through Optimism and Weak Liveness. In 39th International Symposium on Distributed Computing (DISC 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 356, pp. 6:1-6:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{antonino_et_al:LIPIcs.DISC.2025.6,
  author =	{Antonino, Pedro and Durand, Antoine and Roscoe, A. W.},
  title =	{{Hierarchical Consensus: Scalability Through Optimism and Weak Liveness}},
  booktitle =	{39th International Symposium on Distributed Computing (DISC 2025)},
  pages =	{6:1--6:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-402-4},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{356},
  editor =	{Kowalski, Dariusz R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2025.6},
  URN =		{urn:nbn:de:0030-drops-248232},
  doi =		{10.4230/LIPIcs.DISC.2025.6},
  annote =	{Keywords: Hierarchical, Handover, Justifiability, Consensus, Distributed Systems, Blockchain}
}
Document
Research
GraphRAG on Technical Documents - Impact of Knowledge Graph Schema

Authors: Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke

Published in: TGDK (Transactions on Graph Data and Knowledge), Volume 3, Issue 2 (2025)


Abstract
Retrieval Augmented Generation (RAG) is seeing rapid adoption in industry to enable employees to query information captured in proprietary data for their organisation. In this work, we test the impact of domain-relevant knowledge graph schemas on the results of Microsoft’s GraphRAG pipeline. Our approach aims to address the poor quality of GraphRAG responses on technical reports rich in domain-specific terms. The use case involves technical reports about geology, chemistry and mineral processing published by the Minerals Research Institute of Western Australia (MRIWA). Four schemas are considered: a simple five-class minerals domain expert-developed schema, an expanded minerals domain schema, the Microsoft GraphRAG auto-generated schema, and a schema-less GraphRAG. These are compared to a conventional baseline RAG. Performance is evaluated using a scoring approach that accounts for the mix of correct, incorrect, additional, and missing content in RAG responses. The results show that the simple five-class minerals domain schema extracts approximately 10% more entities from the MRIWA reports than the other schema options. Additionally, both the five-class and the expanded eight-class minerals domain schemas produce the most factually correct answers and the fewest hallucinations. We attribute this to the minerals-specific schemas extracting more relevant, domain-specific information during the Indexing stage. As a result, the Query stage’s context window includes more high-value content. This contributes to the observed improvement in answer quality compared to the other pipelines. In contrast, pipelines with fewer domain-related entities in the KG retrieve less valuable information, leaving more room for irrelevant content in the context window. Baseline RAG responses were typically shorter, less complete, and contained more hallucinations compared to our GraphRAG pipelines. 
We provide a complete set of resources at https://github.com/nlp-tlp/GraphRAG-on-Minerals-Domain/tree/main. These resources include links to the MRIWA reports, a set of questions (from simple to challenging) along with domain-expert curated answers, schemas, and evaluations of the pipelines.
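The scoring idea described above - credit correct content while accounting for incorrect, additional, and missing content - can be sketched as a normalized score over fact counts. The penalty weights below are hypothetical choices for illustration; the paper's actual rubric may differ.

```python
# Sketch of a RAG answer score over (correct, incorrect, additional, missing)
# fact counts; weights w_incorrect and w_additional are made-up assumptions.

def answer_score(correct, incorrect, additional, missing,
                 w_incorrect=1.0, w_additional=0.5):
    """Score in [0, 1]: penalized correct-content count over the gold total."""
    gold_total = correct + missing  # facts a perfect answer would contain
    if gold_total == 0:
        return 0.0
    raw = correct - w_incorrect * incorrect - w_additional * additional
    return max(0.0, raw) / gold_total

# An answer that covers 8 of 10 gold facts, states 1 wrong fact, adds 2 extras.
print(answer_score(correct=8, incorrect=1, additional=2, missing=2))  # 0.6
```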

Cite as

Henri Scaffidi, Melinda Hodkiewicz, Caitlin Woods, and Nicole Roocke. GraphRAG on Technical Documents - Impact of Knowledge Graph Schema. In Transactions on Graph Data and Knowledge (TGDK), Volume 3, Issue 2, pp. 3:1-3:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@Article{scaffidi_et_al:TGDK.3.2.3,
  author =	{Scaffidi, Henri and Hodkiewicz, Melinda and Woods, Caitlin and Roocke, Nicole},
  title =	{{GraphRAG on Technical Documents - Impact of Knowledge Graph Schema}},
  journal =	{Transactions on Graph Data and Knowledge},
  pages =	{3:1--3:24},
  ISSN =	{2942-7517},
  year =	{2025},
  volume =	{3},
  number =	{2},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/TGDK.3.2.3},
  URN =		{urn:nbn:de:0030-drops-248131},
  doi =		{10.4230/TGDK.3.2.3},
  annote =	{Keywords: RAG, minerals, local search, global search, entity extraction, competency questions}
}
Document
Proxying Is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability

Authors: Zhongtang Luo, Yanxue Jia, Yaobin Shen, and Aniket Kate

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
TLS allows a client to securely obtain data from a server, but does not allow the client to offer data provenance to an external node. TLS oracle protocols are used to solve this problem. Specifically, the verifier node, as an external node, is convinced that the data is indeed coming from a pre-defined TLS server, while remaining unable to access the client’s credentials (e.g., password). Previous TLS oracle protocols such as DECO (CCS 2020) enforced the communication pattern of server-client-verifier and utilized a novel three-party handshake process during TLS to ensure data integrity against potential tampering by the client. However, this approach introduces a significant performance penalty on the client and the verifier. Most recently, some works have proposed to reduce the overhead by putting the verifier (as a proxy) between the server and the client such that the correct TLS transcript is available to the verifier. Nevertheless, these works still rely on heavy two-party secure computations or zero-knowledge proofs. In this work, we push the proxy model to the extreme: when only the credentials must be protected and the data retrieved from the server may be open to the verifier, the verifier only needs to forward messages, without performing any other heavy computational operations. Surprisingly, we prove that this bare proxy model is enough to guarantee security in some common scenarios, saving 60-90% in running time. We first formalize the proxy-based oracle protocol and functionality that allows the verifier to directly proxy client-server TLS communication, without entering a three-party handshake or interfering with the connection in any way. We then show that for common TLS-based higher-level protocols such as HTTPS, data integrity to the verifier proxy is ensured by the variable padding built into the HTTP protocol semantics.
On the other hand, if a TLS-based protocol comes without variable padding, we demonstrate that data integrity cannot be guaranteed. In this context, we then study the case where the TLS response is pre-determined and cannot be tampered with during the connection. We propose the concept of context unforgeability and show that data integrity can also be guaranteed as long as the underlying Authenticated Encryption with Associated Data (AEAD) satisfies context unforgeability. We further show that ChaCha20-Poly1305 satisfies the concept while AES-GCM does not.
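The proxy role above - the verifier sits on the client-server path, merely forwards TLS records, and records the transcript it relays - can be sketched as a transparent relay. This toy operates on in-memory byte strings (a real deployment would splice sockets); the point it illustrates is that the relay never touches keys, plaintext, or the handshake.

```python
# Sketch of a transcript-recording passthrough proxy; record bytes are
# illustrative TLS application-data framing, not a real session.

class RecordingRelay:
    """Forward opaque records unchanged, keeping a transcript for the verifier."""

    def __init__(self):
        self.transcript = []

    def forward(self, direction: str, record: bytes) -> bytes:
        self.transcript.append((direction, record))
        return record  # byte-for-byte passthrough: no MPC, no ZK, no 3P handshake

relay = RecordingRelay()
req = relay.forward("client->server", b"\x17\x03\x03\x00\x05hello")
resp = relay.forward("server->client", b"\x17\x03\x03\x00\x05world")
print(req == b"\x17\x03\x03\x00\x05hello", len(relay.transcript))  # True 2
```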

Cite as

Zhongtang Luo, Yanxue Jia, Yaobin Shen, and Aniket Kate. Proxying Is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 4:1-4:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{luo_et_al:LIPIcs.AFT.2025.4,
  author =	{Luo, Zhongtang and Jia, Yanxue and Shen, Yaobin and Kate, Aniket},
  title =	{{Proxying Is Enough: Security of Proxying in TLS Oracles and AEAD Context Unforgeability}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{4:1--4:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.4},
  URN =		{urn:nbn:de:0030-drops-247231},
  doi =		{10.4230/LIPIcs.AFT.2025.4},
  annote =	{Keywords: Oracle, TLS, AEAD, Key Commitment}
}
Document
On-Chain Decentralized Learning and Cost-Effective Inference for DeFi Attack Mitigation

Authors: Abdulrahman Alhaidari, Balaji Palanisamy, and Prashant Krishnamurthy

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
Billions of dollars are lost every year on DeFi platforms to transactions exploiting business logic or accounting vulnerabilities. Existing defenses focus on static code analysis, public mempool screening, attacker contract detection, or trusted off-chain monitors, none of which prevents exploits submitted through private relays or malicious contracts that execute within the same block. We present the first decentralized, fully on-chain learning framework that: (i) offloads gas-prohibitive computation to Layer-2 to reduce cost, (ii) propagates verified model updates to Layer-1, and (iii) enables gas-bounded, low-latency inference inside smart contracts. A novel Proof-of-Improvement (PoIm) protocol governs the training process and verifies each decentralized micro-update as a self-verifying training transaction. Updates are accepted by PoIm only if they demonstrably improve at least one core metric (e.g., accuracy, F1-score, precision, or recall) on a public benchmark without degrading any of the other core metrics, while adversarial proposals are financially penalized through an adaptable test set that tracks evolving threats. We develop quantization and loop-unrolling techniques that enable inference for logistic regression, SVM, MLPs, CNNs, and gated RNNs (with support for formally verified decision tree inference) within the Ethereum block gas limit, while remaining bit-exact to their off-chain counterparts, as formally proven in Z3. We curate 298 unique real-world exploits (2020 - 2025) with 402 exploit transactions across eight EVM chains, collectively responsible for $3.74 B in losses. We demonstrate that on-chain ML governed by PoIm detects previously unseen attacks with over 97% attack detection accuracy and 82.0% F1. A single inference, such as one made via an external call, typically incurs zero cost. 
Fully on-chain inference consumes 57,603 gas (≈ $0.18) for linear models, 143,647 gas (≈ $0.49) for CNN(F2, K1), and 506,397 gas (≈ $1.77) for CNN(F8, K4) on L1 (e.g., Ethereum). Our results show that practical and continually evolving DeFi defenses can be embedded directly in protocol logic without trusted guardians, and our solution achieves highly cost-effective protection while filling a critical gap between vulnerability scanners and real-time transaction screening.
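
The quoted dollar figures follow from the usual conversion gas × gas price × ETH price; a quick sketch (the gas price and ETH price below are assumptions for illustration, not taken from the paper):

```python
def gas_to_usd(gas_used, gas_price_gwei, eth_usd):
    """Convert a gas figure to USD:
    gas * price (gwei) * 1e-9 (ETH per gwei) * ETH price (USD)."""
    return gas_used * gas_price_gwei * 1e-9 * eth_usd

# Assumed market conditions (hypothetical): ~1 gwei gas price, ~$3,000/ETH.
# Under those assumptions the paper's 57,603-gas linear model costs ~$0.17,
# in the same ballpark as the quoted ~$0.18.
cost = gas_to_usd(57_603, 1.0, 3_000.0)
```

Any such USD figure is sensitive to both the prevailing base fee and the ETH price, which is presumably why the paper reports gas first and dollars only approximately.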

Cite as

Abdulrahman Alhaidari, Balaji Palanisamy, and Prashant Krishnamurthy. On-Chain Decentralized Learning and Cost-Effective Inference for DeFi Attack Mitigation. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 35:1-35:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{alhaidari_et_al:LIPIcs.AFT.2025.35,
  author =	{Alhaidari, Abdulrahman and Palanisamy, Balaji and Krishnamurthy, Prashant},
  title =	{{On-Chain Decentralized Learning and Cost-Effective Inference for DeFi Attack Mitigation}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{35:1--35:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.35},
  URN =		{urn:nbn:de:0030-drops-247548},
  doi =		{10.4230/LIPIcs.AFT.2025.35},
  annote =	{Keywords: DeFi attacks, on-chain machine learning, decentralized learning, real-time defense}
}
Document
Cache Timing Leakages in Zero-Knowledge Protocols

Authors: Shibam Mukherjee, Christian Rechberger, and Markus Schofnegger

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
The area of modern zero-knowledge proof systems has seen a significant rise in popularity over the last couple of years, with new techniques and optimized constructions emerging on a regular basis. As the field matures, implementation attacks become more relevant; however, side-channel attacks on zero-knowledge proof systems have received surprisingly little treatment so far. In this paper, we give an overview of potential attack vectors and show that some of the underlying finite field libraries, and implementations of heavily used components such as hash functions built on them, are vulnerable to cache attacks on CPUs. On the positive side, we demonstrate that the computational overhead of protecting against these attacks is relatively small.
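
A standard countermeasure against the secret-dependent cache accesses targeted by such attacks is to scan the whole table and select with a mask; a minimal sketch (illustrative only; Python itself gives no real timing guarantees):

```python
def ct_select(table, secret_index):
    """Sketch of a constant-time-style table lookup: every entry is touched
    and combined through an arithmetic mask, so the memory-access pattern
    does not depend on the secret index. (This only illustrates the
    countermeasure used in hardened finite-field and hash implementations.)"""
    result = 0
    for i, entry in enumerate(table):
        # mask is -1 (all ones) exactly when i == secret_index, else 0
        mask = ((i ^ secret_index) - 1) >> 63
        result |= entry & mask
    return result

sbox = [0x63, 0x7C, 0x77, 0x7B]  # toy lookup table standing in for an S-box
assert ct_select(sbox, 2) == sbox[2]
```

The cost of the countermeasure is a full scan per lookup, which matches the paper's observation that the protection overhead can be kept relatively small.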

Cite as

Shibam Mukherjee, Christian Rechberger, and Markus Schofnegger. Cache Timing Leakages in Zero-Knowledge Protocols. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 1:1-1:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{mukherjee_et_al:LIPIcs.AFT.2025.1,
  author =	{Mukherjee, Shibam and Rechberger, Christian and Schofnegger, Markus},
  title =	{{Cache Timing Leakages in Zero-Knowledge Protocols}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{1:1--1:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.1},
  URN =		{urn:nbn:de:0030-drops-247201},
  doi =		{10.4230/LIPIcs.AFT.2025.1},
  annote =	{Keywords: zero-knowledge, protocol, cache timing, side-channel, leakage}
}
Document
From Permissioned to Proof-of-Stake Consensus

Authors: Jovan Komatovic, Andrew Lewis-Pye, Joachim Neu, Tim Roughgarden, and Ertem Nusret Tas

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
This paper presents the first generic compiler that transforms any permissioned consensus protocol into a proof-of-stake permissionless consensus protocol. For each of the following properties, if the initial permissioned protocol satisfies that property in the partially synchronous setting, the consequent proof-of-stake protocol also satisfies that property in the partially synchronous and quasi-permissionless setting (with the same fault-tolerance): consistency; liveness; optimistic responsiveness; every composable log-specific property; and message complexity of a given order. Moreover, our transformation ensures that the output protocol satisfies accountability (identifying culprits in the event of a consistency violation), whether or not the original permissioned protocol satisfied it.
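
The core move in any permissioned-to-proof-of-stake transformation is to replace a fixed validator set with one weighted by stake; a minimal sketch (entirely hypothetical; a real compiler would rely on verifiable randomness such as VRFs rather than a shared PRNG seed):

```python
import random

def sample_committee(stakes, size, seed):
    """Toy stake-weighted committee: each seat is drawn i.i.d. with
    probability proportional to stake. Hypothetical sketch only; it is
    not the paper's construction."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return [rng.choices(validators, weights=weights, k=1)[0] for _ in range(size)]

stakes = {"alice": 60, "bob": 30, "carol": 10}
committee = sample_committee(stakes, size=5, seed=7)
```

With stake-proportional seats, an adversary controlling a minority of stake controls an expected minority of seats, which is what lets fault-tolerance guarantees carry over from the permissioned setting.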

Cite as

Jovan Komatovic, Andrew Lewis-Pye, Joachim Neu, Tim Roughgarden, and Ertem Nusret Tas. From Permissioned to Proof-of-Stake Consensus. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 18:1-18:26, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{komatovic_et_al:LIPIcs.AFT.2025.18,
  author =	{Komatovic, Jovan and Lewis-Pye, Andrew and Neu, Joachim and Roughgarden, Tim and Tas, Ertem Nusret},
  title =	{{From Permissioned to Proof-of-Stake Consensus}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{18:1--18:26},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.18},
  URN =		{urn:nbn:de:0030-drops-247373},
  doi =		{10.4230/LIPIcs.AFT.2025.18},
  annote =	{Keywords: Permissioned Consensus, Proof-of-Stake, generic Compiler, Blockchain}
}
Document
Two-Tier Black-Box Blockchains and Application to Instant Layer-1 Payments

Authors: Michele Ciampi, Yun Lu, Rafail Ostrovsky, and Vassilis Zikas

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
Common blockchain protocols are monolithic, i.e., their security relies on a single assumption, e.g., an honest majority of hashing power (Bitcoin) or stake (Cardano, Algorand, Ethereum). In contrast, so-called optimistic approaches (Thunderella, Meshcash) rely on a combination of assumptions to achieve faster transaction liveness. We revisit, redesign, and augment the optimistic paradigm into a tiered approach. Our design assumes a primary (Tier 1) and a secondary (Tier 2, also referred to as fallback) blockchain, and achieves full security also in a tiered fashion: If the assumption underpinning the primary chain holds, then we guarantee safety, liveness and censorship resistance, irrespective of the status of the fallback chain. And even if the primary assumption fails, all security properties are still satisfied (albeit with a temporary slowdown) provided the fallback assumption holds. To our knowledge, no existing optimistic or tiered approach preserves both safety and liveness when any one of its underlying blockchains (assumptions) fails. The above is achieved by a new detection-and-recovery mechanism that links the two blockchains, so that any violation of safety, liveness, or censorship resistance on the (faster) primary blockchain is temporary - it is swiftly detected and recovered on the secondary chain - and thus cannot result in a persistent fork or halt of the blockchain ledger. We instantiate the above paradigm using a primary chain based on proof of reputation (PoR) and a fallback chain based on proof of stake (PoS). Our construction uses the PoR and PoS blockchains in a mostly black-box manner - rather than assuming concrete constructions, we distil abstract properties of the two blockchains that are sufficient for applying our tiered methodology. 
In fact, choosing reputation as the resource of the primary chain opens the door to an incentive mechanism - which we devise and analyze - that tokenizes reputation in order to deter cheating and boost participation (on both the primary/PoR and the fallback/PoS blockchain). As we demonstrate, such tokenization in combination with interpreting reputation as a built-in system-wide credit score, allows for embedding in our two-tiered methodology a novel mechanism which provides collateral-free, multi-use payment-channel-like functionality where payments can be instantly confirmed.
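
The detection-and-recovery idea can be caricatured in a few lines; a toy sketch (hypothetical, not the paper's construction):

```python
def resolve_ledger(primary, fallback, violation_detected):
    """Toy detection-and-recovery rule: follow the fast primary chain, but
    once a safety/liveness violation on it is flagged, recover on the
    fallback chain so no persistent fork or halt results. Hypothetical
    sketch only."""
    if not violation_detected:
        return list(primary)   # normal case: the faster primary is the ledger
    return list(fallback)      # violation: discard the bad suffix, defer to fallback

# A fork on the primary is detected and the ledger recovers on the fallback:
primary  = ["genesis", "a", "forked!"]
fallback = ["genesis", "a", "b"]
ledger = resolve_ledger(primary, fallback, violation_detected=True)
```

The sketch omits everything interesting (how violations are detected on-chain, how the two chains reference each other), but shows why any primary-chain failure is only temporary under the fallback assumption.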

Cite as

Michele Ciampi, Yun Lu, Rafail Ostrovsky, and Vassilis Zikas. Two-Tier Black-Box Blockchains and Application to Instant Layer-1 Payments. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 19:1-19:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{ciampi_et_al:LIPIcs.AFT.2025.19,
  author =	{Ciampi, Michele and Lu, Yun and Ostrovsky, Rafail and Zikas, Vassilis},
  title =	{{Two-Tier Black-Box Blockchains and Application to Instant Layer-1 Payments}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{19:1--19:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.19},
  URN =		{urn:nbn:de:0030-drops-247380},
  doi =		{10.4230/LIPIcs.AFT.2025.19},
  annote =	{Keywords: Fault tolerant blockchain, instantly confirmed payments}
}
Document
Nakamoto Consensus from Multiple Resources

Authors: Mirza Ahad Baig, Christoph U. Günther, and Krzysztof Pietrzak

Published in: LIPIcs, Volume 354, 7th Conference on Advances in Financial Technologies (AFT 2025)


Abstract
The blocks in the Bitcoin blockchain "record" the amount of work W that went into creating them through proofs of work. When honest parties control a majority of the work, consensus is achieved by picking the chain with the highest recorded weight. Resources other than work have also been considered to secure such longest-chain blockchains. In Chia, blocks record the amount of disk-space S (via a proof of space) and sequential computational steps V (through a VDF). In this paper, we ask what weight functions Γ(S,V,W) (that assign a weight to a block as a function of the recorded space, speed, and work) are secure in the sense that whenever the weight of the resources controlled by honest parties is larger than the weight of adversarial parties, the blockchain is secure against private double-spending attacks. We completely classify such functions in an idealized "continuous" model: Γ(S,V,W) is secure against private double-spending attacks if and only if it is homogeneous of degree one in the "timed" resources V and W, i.e., αΓ(S,V,W) = Γ(S,α V, α W). This includes the Bitcoin rule Γ(S,V,W) = W and the Chia rule Γ(S,V,W) = S ⋅ V. In a more realistic model where blocks are created at discrete time-points, one additionally needs some mild assumptions on the dependency on S (basically, the weight should not grow too much if S is slightly increased, say, linear as in Chia). Our classification is more general and allows various instantiations of the same resource. It provides a powerful tool for designing new longest-chain blockchains. For example, consider combining different PoWs to counter centralization, say the Bitcoin PoW W₁ and a memory-hard PoW W₂. Previous work suggested using W₁+W₂ as the weight. Our results show that, e.g., √(W₁) ⋅ √(W₂) or min{W₁,W₂} are also secure, and we argue that in practice these are much better choices.
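
The homogeneity condition αΓ(S,V,W) = Γ(S,αV,αW) is easy to check numerically; a minimal sketch (the test values are illustrative, not from the paper):

```python
import math

def homogeneous_in_timed(gamma, S, V, W, alpha):
    # The security condition: alpha * Gamma(S, V, W) == Gamma(S, alpha*V, alpha*W)
    return math.isclose(alpha * gamma(S, V, W), gamma(S, alpha * V, alpha * W))

bitcoin = lambda S, V, W: W                  # Bitcoin rule: Gamma = W
chia    = lambda S, V, W: S * V              # Chia rule: Gamma = S * V
geo     = lambda S, V, W: math.sqrt(V * W)   # geometric mean of two timed resources
bad     = lambda S, V, W: V * W              # degree two in the timed resources

alphas = (0.5, 2.0, 7.0)
ok_bitcoin = all(homogeneous_in_timed(bitcoin, 4.0, 2.0, 3.0, a) for a in alphas)
ok_chia    = all(homogeneous_in_timed(chia,    4.0, 2.0, 3.0, a) for a in alphas)
ok_geo     = all(homogeneous_in_timed(geo,     4.0, 2.0, 3.0, a) for a in alphas)
ok_bad     = homogeneous_in_timed(bad, 4.0, 2.0, 3.0, 2.0)  # 2*(2*3)=12 != 4*6=24
```

The first three rules pass (degree one in V and W), while the last fails, matching the abstract's classification.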

Cite as

Mirza Ahad Baig, Christoph U. Günther, and Krzysztof Pietrzak. Nakamoto Consensus from Multiple Resources. In 7th Conference on Advances in Financial Technologies (AFT 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 354, pp. 16:1-16:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


@InProceedings{baig_et_al:LIPIcs.AFT.2025.16,
  author =	{Baig, Mirza Ahad and G\"{u}nther, Christoph U. and Pietrzak, Krzysztof},
  title =	{{Nakamoto Consensus from Multiple Resources}},
  booktitle =	{7th Conference on Advances in Financial Technologies (AFT 2025)},
  pages =	{16:1--16:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-400-0},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{354},
  editor =	{Avarikioti, Zeta and Christin, Nicolas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.AFT.2025.16},
  URN =		{urn:nbn:de:0030-drops-247353},
  doi =		{10.4230/LIPIcs.AFT.2025.16},
  annote =	{Keywords: Nakamoto Consensus, Heaviest-chain Rule, Resource Theory}
}
