
Robust Restaking Networks

Naveen Durvasula, Columbia University, New York, NY, USA
Tim Roughgarden, a16z crypto, New York, NY, USA and Columbia University, New York, NY, USA
Abstract

We study the risks of validator reuse across multiple services in a restaking protocol. We characterize the robust security of a restaking network as a function of the buffer between the costs and profits from attacks. For example, our results imply that if attack costs always exceed attack profits by 10%, then a sudden loss of .1% of the overall stake (e.g., due to a software error) cannot result in the ultimate loss of more than 1.1% of the overall stake. We also provide local analogs of these overcollateralization conditions and robust security guarantees that apply specifically for a target service or coalition of services. All of our bounds on worst-case stake loss are the best possible. Finally, we bound the maximum-possible length of a cascade of attacks.

Our results suggest measures of robustness that could be exposed to the participants in a restaking protocol. We also suggest polynomial-time computable sufficient conditions that can proxy for these measures.

Keywords and phrases:
Proof of stake, Restaking, Staking Risks
Copyright and License:
© Naveen Durvasula and Tim Roughgarden; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Networks → Network economics
Related Version:
Full Version: https://arxiv.org/abs/2407.21785
Acknowledgements:
We thank Tarun Chitra, Soubhik Deb, Sreeram Kannan, Mike Neuder, and Mallesh Pai for comments on earlier drafts of this paper. We thank Soubhik and Sreeram in particular for emphasizing the importance of local guarantees.
Funding:
This research was supported in part by NSF awards CCF-2006737 and CNS-2212745, and research awards from the Briger Family Digital Finance Lab and the Center for Digital Finance and Technologies.
Editors:
Raghu Meka

1 Introduction

1.1 Sharing Validators Across Services

Major blockchain protocols such as Bitcoin or Ethereum are “decentralized,” meaning that transaction execution is carried out by a large and diverse set of “validators.” Such protocols offer a form of “trusted computation,” in the sense that, because they are decentralized, no one individual or entity can easily interfere with their execution. A decentralized and Turing-complete smart contract platform such as Ethereum can then be viewed as a trusted programmable computer capable of performing arbitrary computations.

While Turing-complete, the computing functionality offered by Ethereum smart contracts suffers from limitations imposed by design decisions in the underlying consensus protocol. Most obviously, computation and storage in the Ethereum virtual machine are scarce, with perhaps 15–20 transactions processed per second. Could the Ethereum protocol be somehow bypassed, opening the door for different or more powerful computing functionality, while retaining at least some of the protocol’s decentralization? Or, what about applications that are not compatible with all Ethereum validators, perhaps due to demanding hardware requirements or regulatory constraints?

One natural approach to addressing these challenges is to allow the reuse of a blockchain protocol’s validators across multiple services, where a “service” is some task that could be carried out by some subset of validators. (The initial blockchain protocol can be viewed as the canonical service, performed by all validators.) For example, such services could include alternative consensus protocols (perhaps with higher throughput, a different virtual machine, or different consistency-liveness trade-offs), storage (“data availability”), or verifiable off-chain computation (“zk coprocessors”).¹ We stress that our goal here is to develop a model that isolates some of the fundamental challenges and risks of validator reuse, independent of any specific implementation of the idea.

¹ Several projects, in various stages of production, are currently exploring this idea; the EigenLayer restaking protocol [15] is perhaps the most well known of them.

The obvious danger of validator reuse is an increased risk of a validator deviating from its intended behavior (e.g., due to overwhelming computational responsibilities). Our focus here is deliberate validator deviations in response to economic incentives, such as the profits that could be obtained by corrupting one or more services. The goal of this paper is to quantify such risks:

Under what conditions can validators be safely reused across multiple services?

1.2 Cryptoeconomic Security

We first review the usual “cryptoeconomic” approach to answering a more basic question: when is a blockchain protocol, without any additional services, “safe from attack”? The idea is to perform a cost-benefit analysis from the perspective of an attacker, and declare the protocol safe if the cost of carrying out an attack exceeds the profit that the attacker can expect from it; see also Figure 1. In the specific case of a proof-of-stake blockchain protocol with slashing (such as Ethereum), the cost can be estimated as the value of the validator stake that would be lost to slashing following an attack. For example, let V denote the set of validators of a proof-of-stake blockchain protocol and σ_v the stake of validator v ∈ V (e.g., 32 ETH in Ethereum). In a typical PBFT-type protocol in which double-voting validators lose all of their stake, the cost of an attack (i.e., causing a consistency violation) can be bounded below by (1/3) · ∑_{v∈V} σ_v. In this case, the protocol can be regarded as cryptoeconomically secure provided the estimated profit π of an attack is less than this quantity. To the extent that there is a “buffer” between (1/3) · ∑_{v∈V} σ_v and π, the protocol can be treated as “robustly secure,” meaning secure even after a sudden loss of some amount of stake (e.g., due to slashing caused by a software error).

Figure 1: A blockchain protocol operated by a collection of validators, with π denoting the profit of successfully attacking the protocol and σv the amount of stake posted by a validator v as collateral.
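To make the cost-benefit calculation concrete, the following minimal sketch (ours, for illustration only; function names are not from any protocol) implements the security check described above for a single PBFT-style proof-of-stake protocol:

```python
# Sketch of the cryptoeconomic security check for a single PBFT-style
# protocol: an attack is deterred when its estimated profit is below
# the stake that would be slashed.

def attack_cost(stakes):
    """Lower bound on attack cost: a consistency violation requires
    double-voting by validators holding at least 1/3 of the total
    stake, all of which is slashed."""
    return sum(stakes) / 3

def is_secure(stakes, attack_profit):
    """Secure when the estimated attack profit is below the slashing cost."""
    return attack_profit < attack_cost(stakes)

# 100 validators staking 32 ETH each, as in Ethereum:
stakes = [32.0] * 100
print(is_secure(stakes, attack_profit=1000.0))  # cost is 3200/3 > 1000
```

The gap between `attack_cost` and `attack_profit` is exactly the “buffer” that the robustness results below quantify.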

1.3 Our Results: Robustly Secure Validator Reuse

Viewing the basic scenario of a blockchain protocol as a star graph (with one “service” representing the protocol connected to all the validators running it), the more complex scenario of validators reused across multiple services can be viewed as an arbitrary bipartite graph (Figure 2). As before, we suppose that each validator v ∈ V has some stake σ_v that can be confiscated in the event that v participates in an attack on a service that it has agreed to support. There is now a set S of services, with π_s denoting the profit an attacker would obtain by compromising the service s ∈ S.² We assume that compromising a service s ∈ S requires the participation of an α_s fraction of the overall validator stake supporting s (e.g., α_s = 1/3). With this expanded network formalism to capture multiple services, when should we consider a network to be “secure”?

² We follow [15] and assume that the estimates on attack profitability (the π_s’s) are given. Developing tools to help produce such estimates in practice is an important open research direction.

Figure 2: A general restaking network, with validators reused across multiple services.

Intuitively, a network is insecure if there is a set of services that can be profitably attacked. This requires two conditions to be satisfied: the validators B ⊆ V carrying out the attack must control sufficient stake to do so (for each service s in the attacked set A, the validators of B control at least an α_s fraction of the stake pledged to s), and they must profit from it (i.e., ∑_{s∈A} π_s > ∑_{v∈B} σ_v). We call such a pair (A, B) a valid attack and a network secure if there are no valid attacks; see also Figure 3. With multiple services, security is an inherently combinatorial (as opposed to binary) notion. For example, the computational problem of checking whether a network is secure is as hard as the (coNP-hard) problem of verifying the expansion of a bipartite graph (see [12]).

(a) A valid attack.
(b) A secure network.
Figure 3: Two restaking networks. Each service (left-hand side vertex) and validator (right-hand side vertex) is labeled with its profit from corruption or stake, respectively. Assume that a service can be corrupted if and only if it is attacked by at least half of its validators (i.e., α_s = 1/2 for every service s). The restaking network in (a) is not secure because there is a valid attack (indicated by the dotted line): three validators can earn a profit of 4 by corrupting all four services while losing only three units of stake. The restaking network in (b) is secure.

When is a network “robustly secure,” in the sense that the sudden loss of a small amount of stake cannot enable a catastrophic attack? Unlike the “all-or-nothing” version of this question with a single service, with multiple services, the following more fine-grained question is the appropriate one: given an initial shock in the form of a sudden loss of a ψ fraction of the overall stake, what is the total fraction of stake that might be lost following any consequent valid attacks?

As in the case of a single service, some amount of “buffer” in stake (relative to attack profits) is necessary for robust security. We parameterize this overcollateralization factor via a parameter γ and suppose that, whenever B ⊆ V is a subset of validators capable of corrupting all the services in A ⊆ S, the total stake of B is at least a (1 + γ) factor larger than the total profit from corrupting all of A. For example, this condition holds in the network in Figure 3(b) for γ = 1/2 (but not for larger values of γ).

Our first main result (Theorem 4) precisely characterizes the worst-case (over bipartite graphs and shocks, as a function of γ) fraction of the overall stake that can be lost due to a shock of size ψ: (1 + 1/γ)ψ. Because the network was secure prior to the shock, the value of a (1 + 1/γ)ψ fraction of the overall stake is also an upper bound on the total profit obtained from all of the attacked services.

We also show that our result is tight in a strong sense (Theorem 6 and Theorem 7): for every ψ, γ, and ϵ greater than zero such that 0 ≤ (1 + 1/γ)ψ − ϵ ≤ 1, there exists a restaking graph in which a ψ fraction of the overall stake can disappear in a shock that results in the loss of at least a (1 + 1/γ)ψ − ϵ fraction of the overall stake.³

³ After slightly reducing the validator stakes, the network in Figure 3(b) already shows that the bound is tight for the special case in which ψ = 1/5 and γ is arbitrarily close to 1/2. (Consider a shock that knocks out the validator that is connected to all four services.)

Qualitatively, this result implies that a constant-factor strengthening of the obvious necessary condition for security automatically implies robust security. For example, if attack costs always exceed attack profits by 10%, then a sudden loss of .1% of the overall stake cannot result in the ultimate loss of more than 1.1% of the overall stake.
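The arithmetic behind this example (our numbers, matching the statement above) can be checked directly:

```python
def worst_case_loss(gamma, psi):
    """Upper bound from Theorem 4 on the fraction of total stake lost
    after a shock of size psi, under gamma-overcollateralization."""
    return (1 + 1 / gamma) * psi

# Attack costs exceed profits by 10% (gamma = 0.1); a shock destroys
# 0.1% of the overall stake (psi = 0.001):
print(worst_case_loss(gamma=0.1, psi=0.001))  # at most ~1.1% of all stake
```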

Our result suggests a “risk measure” that could be exposed to the participants in a restaking protocol, namely the maximum value of the buffer parameter γ that holds with respect to the current restaking network. We also suggest easy-to-check sufficient conditions that can proxy for this risk measure (Corollary 5). These conditions are similarly tight, as shown in Theorem 6 and Theorem 7.

1.4 Our Results: Local Robust Security Guarantees

The results described in Section 1.3 are “global” with respect to the network structure, in three distinct senses: (i) the overcollateralization condition is assumed to hold for every subset B ⊆ V of validators and every subset A ⊆ S of services that B is capable of corrupting; (ii) the initial shock can affect any subset of validators, subject to the assumed bound of ψ on the total fraction of stake lost; and (iii) any subset of validators might lose stake following the initial shock, subject to our upper bound of (1 + 1/γ)ψ on the total fraction of lost stake.

Local guarantees

We next pursue more general “local” guarantees, which are parameterized by a set C of services. (The global guarantees will correspond to the special case in which C=S.) For example, C might be a set of closely related services that share a number of dedicated validators. The operators of such a set C might object to both the assumptions and the conclusion of our global guarantee:

  • How can we be sure that the overcollateralization factor holds for services and validators that we know nothing about?

  • And even if we could, how can we be sure that random validators that we have nothing to do with won’t suddenly lose their stake (e.g., because they supported a malicious or buggy service), resulting in an initial shock that causes the loss of more than a ψ fraction of the overall stake?

  • And even if we could, how can we be sure that our validators won’t be the ones that lose their stake following a shock that is purely the fault of other services and/or validators?

To address these concerns, we next consider a refined version of the basic model, parameterized by a set C of services. We denote by Γ(C) the validators that are exclusive to C, meaning that they contribute to no services outside C. (In the special case in which C=S, Γ(C)=V and we recover the original model.) Intuitively, the validators in Γ(C) are the ones that services in C are “counting on”; other validators support (potentially malicious or buggy) services outside C and, from C’s perspective, could disappear at any time. We then restrict attention to initial shocks in which at most a ψ fraction of the total stake controlled by the validators of Γ(C) is lost. (The shock can affect validators outside of Γ(C) arbitrarily.) The goal is then to identify overcollateralization conditions guaranteeing that, no matter what the initial shock and subsequent attacks, the fraction of stake ultimately lost by the validators in Γ(C) is bounded (by a function of the shock size ψ and an overcollateralization parameter γ).

Generalizing our global guarantees to local guarantees is not straightforward and, as the next two examples show, requires additional compromises. The first example shows that protection can be guaranteed only against a subset of valid attacks, a natural and well motivated subset that we call “stable attacks.” The second example shows that, even when restricting attention to stable attacks, overcollateralization is required not only for potential attacks, but more generally for what we call “attack headers.”

Figure 4: Simple overcollateralization is insufficient in the local setting. There are two restaking networks shown above. In each, the validators are denoted along with their corresponding stakes by the yellow circles. Services and their profits from corruption are denoted by the blue rounded squares. In each of these networks, the service outlined in green is overcollateralized. However, despite being unrelated to the overcollateralized service, if the validator outlined in red disappears, the validators in yellow can attack all services (including the overcollateralized one).

Necessity of restricting to stable attacks

In more detail, in the first example (depicted on the left of Figure 4), we consider a restaking network with two services and three validators. (Vertices in this figure correspond to validators, which were previously represented as the right-hand side vertices of a bipartite graph; each service is now identified with its neighborhood of validators.) The service highlighted in green on the left has 100 times as much stake securing it as its profit from corruption, and shares no validators with other services, so one might hope that it would be protected from any shocks that affect only the other validators. However, if the validator outlined in red disappears, the two validators outlined in yellow can attack both services for a profit of 102 − 101 > 0. But this example is unsatisfying: the validator with stake 1 could have attacked the service with profit from corruption 101 on its own to yield a profit of 100. The addition of the validator with stake 100 added 100 to the cost of the attack, but only added 1 (from the corruption of the service highlighted in green) to the profit. We show in Theorem 8 that the issue suggested by this example is fundamental: without further restrictions on attacks that rule out contrived examples such as this one, there does not exist a local condition that guarantees local security. In response, we confine attention to sequences of what we call stable attacks, in which all of the attacking validators contribute positively to the profit of the attack (as opposed to free riding on the profits attributable to other attacking validators). The attack in the example is not stable, as the validator with stake 100 was not a profitable addition to the attack that could have been carried out by the validator with stake 1.

Necessity of overcollateralizing attack headers

Even if we restrict our attention to stable attacks, simple overcollateralization is insufficient to guarantee local security. To appreciate the issue, consider the restaking network shown on the right in Figure 4. In this example, there are three services, each with a profit from corruption of 1, and three validators, each with stake 1. Each validator is used to secure two different services. The service outlined in green is overcollateralized in that its profit from corruption is 1, but two units of stake are securing it. Despite this, if the unrelated validator outlined in red disappears, the two validators outlined in yellow can attack all three services for a profit of 3 − 2 > 0. Furthermore, if all validators are required to attack each service (i.e., α_s = 1), then this attack is stable, as the inclusion of both validators in the attack is profitable. Thus, despite being overcollateralized, a stable attack can be launched on the service highlighted in green even if stake unrelated to the service disappears in a shock.

We find that to guarantee robust security for a coalition of services C using only “local” information, it is necessary to overcollateralize not only pairs (A, B) where B is a set of validators capable of attacking all services in A, but a more general collection of pairs that we call attack headers. Informally, this amounts to requiring that there is some “buffer” in stake for any potential attack on some services in C even if we were to allow every validator that is also securing a service outside of C to join the attack without considering their profitability. Our main result here (Theorem 10) formally provides a local condition guaranteeing that, whenever an initial shock knocks out at most a ψ fraction of the stake that provides security exclusively to C, the worst-case loss of such stake, after an arbitrary sequence of stable attacks, is at most a (1 + 1/γ)ψ fraction. We show that our bounds are tight and indeed require overcollateralization of all attack headers (Corollary 14 and Theorem 15), and again provide easily computable sufficient (and similarly tight) conditions that can proxy for the overcollateralization condition (Corollary 13). Our local condition generalizes the global overcollateralization condition, with the latter corresponding to the special case of the former in which C = S.

1.5 Our Results: Cascading Attacks

An attack on a restaking network results in a loss of stake (of the attacking validators), and this may introduce new opportunities for other sets of validators to carry out profitable attacks. That is, an initial shock may set off an entire cascade of attacks. (All of the bounds on stake loss described in Sections 1.3 and 1.4 hold for cascades of attacks of arbitrary length.) Our final result concerns the maximum-possible length of such a cascade, and shows that this quantity is also governed in part by the overcollateralization factor γ.

Precisely, we define the reference depth of a cascade of attacks as a measure of the “long-range dependence” between different attacks in an attack sequence. For example, if each attack in the sequence is directly enabled by the loss of the validators slashed in the previous attack, then the reference depth of the sequence is 1. Our main result here (Theorem 16) bounds the maximum-possible attack length as a function of the reference depth k, the shock size ψ, the overcollateralization factor γ, and the minimum stake ϵ held by a validator: k · (1 + log_{1+γ}(ψ · [total stake] / (ϵγ))). For example, in the case of constant reference depth and equal validator stake amounts, the worst-case attack length is logarithmic in the number of validators with overcollateralization and linear without it.
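As a numeric illustration (our arithmetic, plugging sample parameters into the bound stated above):

```python
import math

def max_cascade_length(k, psi, gamma, eps, total_stake):
    """Bound from Theorem 16 on the length of a cascade with reference
    depth k, shock size psi, overcollateralization factor gamma, and
    minimum validator stake eps."""
    return k * (1 + math.log(psi * total_stake / (eps * gamma), 1 + gamma))

# 1000 unit-stake validators, a 1% shock, gamma = 0.1, reference depth 1:
print(max_cascade_length(k=1, psi=0.01, gamma=0.1, eps=1.0, total_stake=1000.0))
```

With these (hypothetical) parameters the bound is roughly 49 attacks, and it shrinks as γ grows, consistent with the logarithmic-versus-linear contrast noted above.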

1.6 Related Work

Our focus on the risks of cascading failures following a small shock echoes some of the themes in the literature on systemic risk in financial networks. For example, Eisenberg and Noe [9] study the existence and structure of inter-firm payments in a financial network following a default. This work is extended by Glasserman and Young [11], who study how “connectedness,” meaning the fraction of liabilities that a firm externally owes, affects contagion risk. Acemoglu et al. [2] build further on this work and study network “stability,” meaning the propensity for shocks to propagate; they show that connectivity initially improves stability, but then at a phase transition, denser connectivity leads to increased shock propagation. In a subsequent paper, Acemoglu et al. [1] unify a number of the preceding results.

A separate line of work, beginning with Chen et al. [6], aims to axiomatically characterize systemic risk measures. The model in [6] can capture, in particular, a contagion model characterized by a matrix of profits and losses over different firms and outcomes. The results in [6] characterize the global measures of risk (operating on the matrix) that satisfy certain sets of desirable axioms. This work is expanded upon in Kromer et al. [13], where the authors consider general outcome measure spaces, as well as in Feinstein et al. [10], where the authors consider set-valued risk measures. Battiston et al. [4] study systemic risk measurement when a regulator has limited information about contracts made between financial network participants (e.g., the fraction of the face value that can likely be recovered if a counterparty defaults), and show how small errors in knowledge can lead to large errors in systemic risk measurement.

Our work also shares some conceptual similarity with the well-known work of Diamond and Dybvig [8] on bank runs and of Brunnermeier et al. [5] on the risks of re-hypothecation.

The model in the present work differs substantially from those considered in the aforementioned papers, in large part because of the idiosyncrasies of restaking networks, including their combinatorial and bipartite nature and their susceptibility to economically motivated attacks.

Restaking has also, to a limited extent, been studied in its own right. The EigenLayer team introduces restaking in [15]. The framing of an economically motivated attacker that trades off stake loss with profits from corruption appears in [15], and is used also by Deb et al. [7]. Chitra and Neuder [14] discuss restaking risk from a validator perspective, comparing restaking with investments in other financial instruments (e.g. bonds). Alexander [3] considers the interplay between existing leveraging schemes and liquid restaking tokens, which can amplify large-scale credit risk.

2 Model

Validators and Services

We consider a setting in which there is a set V of validators and a set S of services. Each service s ∈ S has some profit from corruption π_s, and each validator v ∈ V has some stake σ_v. We also associate with each service s a parameter α_s that denotes the fraction of stake required to corrupt/launch an attack on s. We call a bipartite graph G = (S, V, E, π, σ, α) a restaking graph; an edge is drawn between a validator v ∈ V and a service s ∈ S if v is restaking for s. For a given set of vertices A in a graph G, we use the notation N_G(A) to denote the neighbors of A.

Attack Dynamics

For simplicity, we assume that validators lose their full stake σ_v if they launch an attack on a service. As such, for a given collection of services A ⊆ S, and a given collection of validators B ⊆ V restaking for those services, we say that (A, B) is an attacking coalition for a restaking graph G if the validators in B possess enough stake to corrupt the services A:

∑_{v ∈ B ∩ N_G({s})} σ_v  ≥  α_s · ∑_{v ∈ N_G({s})} σ_v   for all s ∈ A,  (1)

where the left-hand side is the total stake in s owned by the validators B and the right-hand sum is the total amount restaked in s.

We further say that (A, B) is a valid attack if it is an attacking coalition that has an incentive to launch an attack:

∑_{s ∈ A} π_s  >  ∑_{v ∈ B} σ_v,  (2)

i.e., the total profit from corrupting A exceeds the total stake owned by the validators B.

If a valid attack (A, B) is carried out, we denote by G ∖ B the induced subgraph G[S, V ∖ B]. The graph G ∖ B denotes the state of the restaking graph after the attack is carried out. If no valid attacks exist on the graph, we call it secure. To simplify notation, for any subset of validators B ⊆ V, we will use the shorthand σ_B to denote ∑_{v∈B} σ_v. Similarly, for any A ⊆ S, we will use π_A as shorthand for ∑_{s∈A} π_s.
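The two definitions above translate directly into code. The following sketch is ours (not from the paper): the graph is encoded as a set of (service, validator) edges, with dicts for σ, π, and α.

```python
def is_attacking_coalition(edges, sigma, alpha, A, B):
    """Equation (1): for every service s in A, the validators in B hold
    at least an alpha_s fraction of the total stake restaked in s."""
    B = set(B)
    for s in A:
        restakers = {v for (s2, v) in edges if s2 == s}
        total = sum(sigma[v] for v in restakers)
        held = sum(sigma[v] for v in restakers & B)
        if held < alpha[s] * total:
            return False
    return True

def is_valid_attack(edges, sigma, pi, alpha, A, B):
    """Equations (1) and (2): an attacking coalition whose total profit
    pi_A exceeds the total stake sigma_B it would forfeit."""
    return (is_attacking_coalition(edges, sigma, alpha, A, B)
            and sum(pi[s] for s in A) > sum(sigma[v] for v in B))
```

For instance, on a toy graph with one service x (π_x = 3, α_x = 1/2) and two unit-stake validators, either validator alone forms a valid attack.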

EigenLayer sufficient conditions

We note in passing that, in their whitepaper [15], the EigenLayer team proposes efficiently verifiable sufficient conditions for security, which can be checked to ensure that no valid attack exists.

Claim 1 (EigenLayer Sufficient Conditions, from Appendix B.1 of the EigenLayer Whitepaper [15]).

A restaking graph G is secure if for each validator v ∈ V,

∑_{s ∈ N_G({v})} (σ_v / σ_{N_G({s})}) · (π_s / α_s)  ≤  σ_v.  (3)
Proof.

It is shown in Appendix B.1 of the whitepaper that if Equation 3 holds for G, then the graph is secure (i.e., no valid attacks (A, B) satisfying Equations 1 and 2 exist).
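Condition (3) can be checked in time linear in the number of edges. Here is a sketch of that check (ours, using a set of (service, validator) edges and dicts for σ, π, α):

```python
def eigenlayer_secure(edges, sigma, pi, alpha):
    """Claim 1 / Equation (3): for each validator v, its pro-rata share
    of profit-from-corruption across the services it secures must not
    exceed its own stake."""
    services = {s for (s, _) in edges}
    stake_in = {s: sum(sigma[v] for (s2, v) in edges if s2 == s)
                for s in services}
    return all(
        sum((sigma[v] / stake_in[s]) * (pi[s] / alpha[s])
            for (s, v2) in edges if v2 == v) <= sigma[v]
        for v in sigma
    )
```

On the one-service, two-validator graph used later in the proof of Theorem 6 (σ_a = ϵ, σ_b = 1 − ϵ, π_x = α_x = 1), the condition holds with equality for both validators.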

Cascading attacks

Our goal is to understand when a small shock can result in the loss of a large fraction of the overall stake. Small shocks can turn into large shocks by means of a cascading attack. Formally, we say that a disjoint sequence (A_1, B_1), …, (A_T, B_T) ∈ 2^S × 2^V is a valid cascade of attacks on a restaking graph G = (S, V, E, π, σ, α) if for each t ∈ [T], (A_t, B_t) is a valid attack on G ∖ ∪_{i=1}^{t−1} B_i. We denote by 𝒞(G) the set of all such sequences of valid cascading attacks.

Worst-case stake loss

We now define a metric that measures the total potential loss of stake due to a sequence of cascading attacks. In our model, we first suppose that an initial small shock decreases the amount of stake. Formally, we define, for a given restaking graph G=(S,V,E,π,σ,α),

𝔻_ψ(G) := { D ⊆ V : σ_D / σ_V ≤ ψ }  (4)

to be the set of validator coalitions that constitute at most a ψ-fraction of all stake. Given some D ∈ 𝔻_ψ(G), we use the notation G ∖ D := G[S, V ∖ D] to denote the induced subgraph of the restaking graph when we delete the validators D. We now define

R_ψ(G) := ψ + max_{D ∈ 𝔻_ψ(G)}  max_{(A_1,B_1),…,(A_T,B_T) ∈ 𝒞(G∖D)}  σ_{∪_{t=1}^T B_t} / σ_V,  (5)

where the first term ψ accounts for the initial shock and the second term for the stake lost from cascading attacks.

This quantity represents the worst-case total fraction of stake lost due to an initial ψ-fraction of the stake disappearing. By construction, ψ ≤ R_ψ(G) ≤ 1.

3 Overcollateralization Provides Robust Security

In this section, we show that “scaling up” the definition of security automatically results in robust security, meaning bounded losses from cascading attacks that follow an initial shock. We first show that, without loss of generality, it suffices to consider single valid attacks (A, B) ∈ 𝒞(G ∖ D) instead of more general cascading attacks.

Lemma 2.

Let G = (S, V, E, π, σ, α) be an arbitrary restaking graph, and further suppose that (A, B) is an attacking coalition on G ∖ D, where D ⊆ V. Then, (A, B ∪ D) is an attacking coalition on G.

Proof.

Because (A, B) is an attacking coalition on G ∖ D, we must have by Equation 1 that

σ_{B ∩ N_G({s})}  ≥  α_s · σ_{N_G({s}) ∖ D}   for all s ∈ A.  (6)

It follows that for any s ∈ A,

σ_{(B ∪ D) ∩ N_G({s})} = σ_{B ∩ N_G({s})} + σ_{D ∩ N_G({s})}  (7)
  ≥ α_s · σ_{N_G({s}) ∖ D} + σ_{D ∩ N_G({s})}  (8)
  ≥ α_s · σ_{N_G({s})},  (9)

where the last inequality uses α_s ≤ 1, and the desired result follows.

Corollary 3.

Let G = (S, V, E, π, σ, α) be an arbitrary restaking graph, and further suppose that (A_1, B_1), …, (A_T, B_T) ∈ 𝒞(G) is a valid sequence of cascading attacks on G. Then, (∪_{t=1}^T A_t, ∪_{t=1}^T B_t) is also a valid attack on G.

Proof.

By repeatedly applying Lemma 2, we find that for each t ∈ [T], (A_t, ∪_{i=1}^t B_i) is an attacking coalition on G. It follows by inspection of Equation 1 that we must therefore have that (∪_t A_t, ∪_t B_t) is an attacking coalition on G. To finish the result, it suffices to show that Equation 2 holds for this attacking coalition on the original graph G. This follows from the disjointness of the A_t’s and of the B_t’s:

π_{∪_t A_t} = ∑_{t=1}^T π_{A_t} > ∑_{t=1}^T σ_{B_t} = σ_{∪_t B_t},  (10)

where in the inner inequality we use that for each t ∈ [T], π_{A_t} > σ_{B_t} by Equation 2, as (A_t, B_t) is a valid attack on G ∖ ∪_{i=1}^{t−1} B_i. It follows that (∪_t A_t, ∪_t B_t) is a valid attack on G.
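Corollary 3 also yields a simple brute-force computation of R_ψ on tiny graphs: since the union of any valid cascade is itself a single valid attack, it suffices to maximize σ_B over single valid attacks on G ∖ D. The exponential-time sketch below is ours, for small examples only:

```python
from itertools import chain, combinations

def _subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def max_attack_loss(edges, sigma, pi, alpha, removed):
    """Max sigma_B over valid attacks (A, B) on G with `removed` deleted.
    For a fixed B, taking A to be *all* services B can corrupt maximizes
    the attack profit, so only that A needs checking."""
    services = {s for (s, _) in edges}
    alive = [v for v in sigma if v not in removed]
    best = 0.0
    for B in _subsets(alive):
        B = set(B)
        A = []
        for s in services:
            nbrs = {v for (s2, v) in edges if s2 == s and v not in removed}
            total = sum(sigma[v] for v in nbrs)
            if total > 0 and sum(sigma[v] for v in nbrs & B) >= alpha[s] * total:
                A.append(s)
        if sum(pi[s] for s in A) > sum(sigma[v] for v in B):  # Equation (2)
            best = max(best, sum(sigma[v] for v in B))
    return best

def R(edges, sigma, pi, alpha, psi):
    """Worst-case stake loss R_psi(G) of Equation (5), by brute force
    over shocks D with sigma_D <= psi * sigma_V (using Corollary 3)."""
    total = sum(sigma.values())
    best = max(max_attack_loss(edges, sigma, pi, alpha, set(D))
               for D in _subsets(sigma)
               if sum(sigma[v] for v in D) <= psi * total)
    return psi + best / total
```

On the two-validator graph from the proof of Theorem 6 (with ϵ = 0.2), this computation confirms that the graph is secure with no shock, yet R_ψ = 1 once ψ ≥ ϵ.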

Adding Multiplicative Slack

Our condition is given by adding multiplicative slack to Equation 2. Formally, we say that a restaking graph G is secure with γ-slack if for all attacking coalitions (A, B) on G,

(1 + γ) · π_A  ≤  σ_B,  (11)

i.e., the total stake owned by the validators B is at least a (1 + γ) factor larger than the total profit from corrupting A.
Theorem 4.

Suppose that a restaking graph G = (S, V, E, π, σ, α) is secure with γ-slack for some γ > 0. Then, for any ψ > 0, R_ψ(G) < (1 + 1/γ)ψ.

Proof.

Fix a restaking graph G for which (11) holds, and take any ψ > 0 and any D ∈ 𝔻_ψ(G). Let (A_1, B_1), …, (A_T, B_T) ∈ 𝒞(G ∖ D) be arbitrary. Applying Corollary 3, we must have that (∪_t A_t, ∪_t B_t) ∈ 𝒞(G ∖ D) as well. Defining A := ∪_t A_t and B := ∪_t B_t, we must therefore have that (A, B) is an attacking coalition on G ∖ D, and furthermore that

π_A > σ_B.  (12)

By Lemma 2, we must also have that (A, B ∪ D) is an attacking coalition on the original graph G. It then follows that as G is secure with γ-slack, Equation 11 must hold for (A, B ∪ D), whence

(1 + γ) · π_A  ≤  σ_{B∪D} = σ_B + σ_D.  (13)

Putting this together with Equation 12, we find that

(1 + γ) · σ_B < (1 + γ) · π_A ≤ σ_B + σ_D ≤ σ_B + ψ · σ_V.  (14)

It follows that

γ · σ_B < ψ · σ_V  ⟹  σ_B / σ_V < ψ/γ  (15)
  ⟹  ψ + σ_B / σ_V < (1 + 1/γ)ψ.  (16)

As we took (A_1, B_1), …, (A_T, B_T) to be arbitrary, we find that R_ψ(G) < (1 + 1/γ)ψ, as desired.

The EigenLayer sufficient conditions (3) can be similarly “scaled up” to yield efficiently checkable sufficient conditions for security with γ-slack (and hence, by Theorem 4, robustness to cascading attacks).

Corollary 5.

Let G be a restaking graph such that, for all validators v ∈ V,

∑_{s ∈ N_G({v})} (σ_v / σ_{N_G({s})}) · ((1 + γ) · π_s / α_s)  ≤  σ_v.  (17)

Then, R_ψ(G) < (1 + 1/γ)ψ.

Proof.

This follows from Theorem 4 and Claim 1. Noting that the γ-slack condition holds precisely when no valid attacks exist after the profits from corruption π_s are inflated by a multiplicative factor of (1 + γ), it suffices to apply EigenLayer’s sufficient conditions from Claim 1 with the modified profits from corruption (1 + γ)π_s. Note that, given a restaking network, it is straightforward to compute the maximum value of γ such that the condition in (17) holds. This value can then be interpreted as an easily computed “risk measure” of such a network.
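This risk measure admits a closed form: the left-hand side of (17) scales linearly in (1 + γ), so the largest admissible γ is 1/r − 1, where r is the worst ratio, over validators v, of v’s pro-rata exposure to its stake. A sketch of this computation (ours, under the same edge-set encoding as before):

```python
def max_slack_gamma(edges, sigma, pi, alpha):
    """Largest gamma such that condition (17) holds for every validator.
    Returns inf if no validator has any exposure, and a negative value
    if (17) already fails at gamma = 0."""
    services = {s for (s, _) in edges}
    stake_in = {s: sum(sigma[v] for (s2, v) in edges if s2 == s)
                for s in services}
    worst = 0.0
    for v in sigma:
        # Pro-rata exposure of v at gamma = 0, i.e. the LHS of (3):
        exposure = sum((sigma[v] / stake_in[s]) * (pi[s] / alpha[s])
                       for (s, v2) in edges if v2 == v)
        worst = max(worst, exposure / sigma[v])
    return float("inf") if worst == 0.0 else 1.0 / worst - 1.0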

4 Lower Bounds for Global Security

In this section, we show that the upper bounds from the previous section are tight. We first show that if there is no multiplicative slack (i.e., γ = 0), then very small shocks can cause all stake to be lost in the worst case. This holds even under the EigenLayer conditions (3).⁴

⁴ The graph exhibited in the proof of Theorem 6 has π_x/σ_a → ∞ as ϵ → 0. It is possible to construct a counterexample with similar properties while maintaining that π_s/σ_v is greater than some universal constant for any s ∈ S and v ∈ V. This is done in the full version of the paper.

Theorem 6.

For any 0 < ϵ < 1, there exists a restaking graph G that is secure and meets the EigenLayer condition (3), but has R_ψ(G) = 1 for all ψ ≥ ϵ.

Proof.

We construct a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ with one service $S=\{x\}$ and two validators $V=\{a,b\}$, where an edge exists between each validator and the service (i.e., $E:=\{(x,a),(x,b)\}$). We then let $\sigma_a:=\epsilon$, $\sigma_b:=1-\epsilon$, $\pi_x:=1$, and $\alpha_x:=1$. Without loss of generality, we may assume $\psi<1$, as $R_1(G)=1$ for any restaking graph $G$. This graph satisfies (3), as

$$\frac{\sigma_a}{\sigma_a+\sigma_b}\,\pi_x\alpha_x=\sigma_a \tag{18}$$
$$\frac{\sigma_b}{\sigma_a+\sigma_b}\,\pi_x\alpha_x=\sigma_b \tag{19}$$

whence the graph is also secure by Theorem 1. We now consider an initial shock $D=\{a\}$. As $\sigma_a/\sigma_V=\epsilon\le\psi$, it follows that $D\in\mathbb{D}_\psi(G)$. The pair $(\{x\},\{b\})$ is a valid attack on $G\setminus D$, since $\{b\}=N_{G\setminus D}(\{x\})$ (whence it is an attacking coalition) and $\sigma_b<\pi_x$. It follows that $R_\psi(G)\ge\frac{\sigma_a+\sigma_b}{\sigma_V}=1$, as desired.
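The construction above can be checked numerically. The following sketch (illustrative, with $\epsilon=0.01$) verifies that condition (3) holds on the original graph, yet after the shock $D=\{a\}$ the attack $(\{x\},\{b\})$ is valid.

```python
eps = 0.01                        # illustrative choice of epsilon
sigma = {"a": eps, "b": 1 - eps}  # stakes from the Theorem 6 construction
pi_x, alpha_x = 1.0, 1.0
total = sum(sigma.values())

# EigenLayer condition (3) holds (with equality) for both validators
for v in sigma:
    assert sigma[v] / total * pi_x * alpha_x <= sigma[v] + 1e-12

# After the shock D = {a}, validator b holds all of x's remaining stake,
# so ({x}, {b}) is an attacking coalition, and the attack is profitable:
assert sigma["b"] >= alpha_x * sigma["b"]   # coalition condition on G \ D
assert sigma["b"] < pi_x                    # profitability: sigma_b < pi_x
```

An arbitrarily small shock ($\epsilon$ of the stake) thus exposes the remaining $1-\epsilon$ of the stake.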

Next, we show that the bound we give in Theorem 4 (indeed, more strongly, the condition given in Corollary 5) is tight for all $\psi,\gamma>0$.

Theorem 7.

For any $\psi,\gamma,\epsilon>0$ such that

$$0\le\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\le 1, \tag{20}$$

there exists a restaking graph $G$ that satisfies the condition (17) from Corollary 5 but has $R_\psi(G)\ge\left(1+\frac{1}{\gamma}\right)\psi-\epsilon$.

Proof.

We construct a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ with three validators $V=\{a,b,c\}$ and one service $S=\{x\}$, where an edge exists between each of the validators $a$ and $b$ and the service $x$ (i.e., the edge set $E:=\{(x,a),(x,b)\}$). Without loss of generality, suppose that $\epsilon\le\psi/\gamma$. Let $\sigma_a>0$ be any positive constant. We define

$$\sigma_b:=\sigma_a\left(\frac{1}{\gamma}-\frac{\epsilon}{\psi}\right) \tag{21}$$
$$\sigma_c:=\sigma_a\left(\frac{1}{\psi}+\frac{\epsilon}{\psi}-1-\frac{1}{\gamma}\right) \tag{22}$$
$$\pi_x:=\frac{\left(1+\frac{1}{\gamma}\right)\psi-\epsilon}{1+\gamma}\,\sigma_V \tag{23}$$
$$\alpha_x:=1 \tag{24}$$

Notice first that $\sigma_b\ge 0$, as we have taken $\epsilon\le\psi/\gamma$. Next, observe that

$$\sigma_c\ge 0\iff\frac{1}{\psi}+\frac{\epsilon}{\psi}\ge 1+\frac{1}{\gamma} \tag{25}$$
$$\iff 1\ge\left(1+\frac{1}{\gamma}\right)\psi-\epsilon \tag{26}$$

whence $\sigma_c\ge 0$ by Equation 20. Finally, we must also have $\pi_x\ge 0$, by Equation 20 as well. Next, notice by construction that

$$\sigma_V=\sigma_a+\sigma_b+\sigma_c=\sigma_a\left(1+\frac{1}{\gamma}-\frac{\epsilon}{\psi}+\frac{1}{\psi}+\frac{\epsilon}{\psi}-1-\frac{1}{\gamma}\right)=\frac{\sigma_a}{\psi} \tag{27}$$

This graph meets condition (17) from Corollary 5, as

$$\sum_{s\in N_G(\{a\})}\frac{\sigma_a}{\sigma_{N_G(\{s\})}}(1+\gamma)\pi_s=\frac{\sigma_a}{\sigma_a+\sigma_b}\left[\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\right]\sigma_V \tag{28}$$
$$=\frac{\sigma_a}{\sigma_a\left(1+\frac{1}{\gamma}-\frac{\epsilon}{\psi}\right)}\left[\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\right]\frac{\sigma_a}{\psi} \tag{29}$$
$$=\sigma_a \tag{30}$$

A similar argument shows that

$$\sum_{s\in N_G(\{b\})}\frac{\sigma_b}{\sigma_{N_G(\{s\})}}(1+\gamma)\pi_s=\frac{\sigma_b\,\psi}{\sigma_a\left[\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\right]}\cdot\left[\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\right]\frac{\sigma_a}{\psi}=\sigma_b \tag{31}$$

Finally, as the validator $c$ has no neighbors, it also satisfies the condition, whence the graph indeed satisfies (17). We now consider an initial shock $D:=\{a\}$ to the graph. Because

$$\frac{\sigma_a}{\sigma_V}=\frac{\sigma_a}{\sigma_a/\psi}=\psi, \tag{32}$$

the shock $D\in\mathbb{D}_\psi$ constitutes a $\psi$ fraction of the total stake, as desired. The attack $(\{x\},\{b\})$ is a valid attack on $G\setminus D$. To see this, notice first that $(\{x\},\{b\})$ is an attacking coalition on $G\setminus D$, as $\{b\}=N_{G\setminus D}(\{x\})$. Furthermore, $\{b\}$ is incentivized to attack, since

$$\pi_x-\sigma_b=\frac{\left(1+\frac{1}{\gamma}\right)\psi-\epsilon}{1+\gamma}\,\sigma_V-\sigma_b \tag{33}$$
$$=\left[\frac{1+\frac{1}{\gamma}-\frac{\epsilon}{\psi}}{1+\gamma}-\left(\frac{1}{\gamma}-\frac{\epsilon}{\psi}\right)\right]\sigma_a \tag{34}$$
$$=\left[\frac{\left(1+\frac{1}{\gamma}\right)\psi+\gamma\epsilon}{(1+\gamma)\psi}-\frac{1}{\gamma}\right]\sigma_a \tag{35}$$
$$=\frac{\gamma\epsilon\,\sigma_a}{(1+\gamma)\psi}>0 \tag{36}$$

whence Equation 2 is satisfied for the pair $(\{x\},\{b\})$. It follows that

$$R_\psi(G)\ge\frac{\sigma_a+\sigma_b}{\sigma_V}=\frac{\left(1+\frac{1}{\gamma}-\frac{\epsilon}{\psi}\right)\sigma_a}{\sigma_a/\psi}=\left(1+\frac{1}{\gamma}\right)\psi-\epsilon \tag{37}$$

as desired.
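The algebra above can be spot-checked numerically. The following sketch instantiates the construction with the illustrative values $\gamma=1$, $\psi=0.2$, $\epsilon=0.1$ (which satisfy (20) and $\epsilon\le\psi/\gamma$).

```python
gamma, psi, eps = 1.0, 0.2, 0.1   # illustrative values satisfying (20)
sigma_a = 1.0
sigma_b = sigma_a * (1 / gamma - eps / psi)                    # eq. (21)
sigma_c = sigma_a * (1 / psi + eps / psi - 1 - 1 / gamma)      # eq. (22)
sigma_V = sigma_a + sigma_b + sigma_c
pi_x = ((1 + 1 / gamma) * psi - eps) / (1 + gamma) * sigma_V   # eq. (23)

assert abs(sigma_V - sigma_a / psi) < 1e-9                     # eq. (27)
# attacking {x} with {b} alone is profitable, matching eq. (36):
assert abs((pi_x - sigma_b) - gamma * eps * sigma_a / ((1 + gamma) * psi)) < 1e-9
# realized stake loss matches the lower bound of eq. (37):
assert abs((sigma_a + sigma_b) / sigma_V - ((1 + 1 / gamma) * psi - eps)) < 1e-9
```

With these parameters a shock of $\psi=20\%$ of the stake cascades into a total loss of $30\% = (1+\frac{1}{\gamma})\psi-\epsilon$.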

5 Local Security

The bound in Theorem 4 on the worst-possible stake loss from cascading attacks is reassuring from a global perspective, but less so from the perspective of one or a small number of services that would like an assurance that they will not be among those affected by such attacks. Beginning with this section, we focus on a specific coalition of services $C\subseteq S$ that seeks to insulate its shared security $\Gamma(C)$ against shocks and resulting cascading attacks that may come about due to the decisions of other services and validators. Formally, we denote by

$$\Gamma(C):=\{v\in V\mid N_G(\{v\})\subseteq C\} \tag{38}$$

the set of validators that exclusively provide security for services in $C$.

Worst-case stake loss (local version)

As before, we first suppose that an initial shock affects the restaking graph. Whereas previously we considered shocks for which the total stake in the shock was bounded, we now consider shocks for which the total stake that impacts the exclusive security of some coalition of services $C$ (i.e., stake that secures services from $C$ and only services from $C$) is bounded. Formally, for any coalition of services $C\subseteq S$ within some restaking graph $G$, we let

$$\mathbb{D}_\psi(C,G):=\left\{D\subseteq V\;\middle|\;\frac{\sigma_{D\cap\Gamma(C)}}{\sigma_{\Gamma(C)}}\le\psi\right\} \tag{39}$$

denote the set of all validator coalitions that provide at most a $\psi$ fraction of the aggregate security of the coalition of services $C$. Notice that shocks $D\in\mathbb{D}_\psi(C,G)$ may have much more total stake $\sigma_D$ than a $\psi$ fraction of the graph. We are instead only guaranteed that the impact of the shock on stake that is being used exclusively for members of $C$ is small. We are now interested in the potential cascading losses that can affect the stake that is exclusively utilized by the coalition $C$ after a shock occurs that destroys at most a $\psi$ fraction of the aggregate security of $C$. Formally, we study the quantity

$$R_\psi(C,G):=\underbrace{\psi}_{\text{Initial shock}}+\max_{D\in\mathbb{D}_\psi(C,G)}\;\max_{(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{C}(G\setminus D)}\underbrace{\frac{\sigma_{\bigcup_{t=1}^{T}B_t\cap\Gamma(C)}}{\sigma_{\Gamma(C)}}}_{\text{Stake lost from cascading attacks}} \tag{40}$$
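These definitions translate directly into code. The sketch below (a hypothetical set-based representation, assumed for illustration) computes $\Gamma(C)$ and tests membership in $\mathbb{D}_\psi(C,G)$.

```python
def exclusive_validators(C, restakes):
    """Gamma(C): validators whose entire service neighborhood lies inside C.
    restakes maps each validator to the set of services it secures."""
    return {v for v, svcs in restakes.items() if svcs <= C}

def in_shock_set(D, C, restakes, sigma, psi):
    """Membership test for D in D_psi(C, G): the shock may destroy at most
    a psi fraction of the stake that exclusively secures C."""
    gamma_C = exclusive_validators(C, restakes)
    hit = sum(sigma[v] for v in D & gamma_C)
    return hit <= psi * sum(sigma[v] for v in gamma_C)
```

For instance, a shock that hits only validators restaked outside $C$ lies in $\mathbb{D}_0(C,G)$ regardless of its total stake, mirroring the remark above.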

Local security conditions

We seek sufficient conditions on a restaking graph $G$ that guarantee a nontrivial upper bound on $R_\psi(C,G)$. Ideally, the sufficient condition would depend only on the neighborhood/choices of the coalition $C$, so as to provide a guarantee that holds regardless of the choices made by other validators and services (i.e., the services of $C$ can attempt to "control their own destiny" by ensuring that the locally defined sufficient condition holds). Formally, given a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ and a coalition of services $C\subseteq S$, we call a restaking graph $G'=(S',V',E',\pi',\sigma',\alpha')$ a $C$-local variant of $G$ if $C$ cannot distinguish $G$ from $G'$ on the basis of local information: $C\subseteq S'$, $N_{G'}(C)=N_G(C)$, and

$$(\pi'_s,\alpha'_s)=(\pi_s,\alpha_s)\qquad\forall s\in C \tag{41}$$
$$(\sigma'_v,N_{G'}(\{v\}))=(\sigma_v,N_G(\{v\}))\qquad\forall v\in N_G(C) \tag{42}$$

We then define a local security condition $f:(C,G)\mapsto\{0,1\}$ to be a Boolean function that takes as input a restaking graph $G$ and a coalition of services $C\subseteq S$, such that $f(C,G)$ must be equal to $f(C,G')$ for all $C$-local variants $G'$. The intuition behind this definition is that the condition should depend only on service-level information (e.g., profits from corruption, security thresholds) for services in the coalition, and validator-level information for validators in $N_G(C)$.

Local security impossibility

Unfortunately, without further restrictions on the attacks under consideration (like those defined later in this section), it is impossible to construct a local security condition that yields a nontrivial upper bound on $R_\psi(C,G)$.

Theorem 8.

For any local security condition $f$, any secure restaking graph $G$, and any coalition of services $C\subseteq S$ such that $f(C,G)=1$, there exists a secure $C$-local variant $G'$ of $G$ such that $R_0(C,G')=1$.

Proof.

Take any $f$, $C$, and secure $G$ such that $f(C,G)=1$. Define

$$\Delta:=\sigma_{N_G(C)}-\pi_C \tag{43}$$

to be the total overcollateralization of $C$ in aggregate. Next, we define $G'$ to be an augmented version of $G$, where we add a new service $s$ that has a profit from corruption $\pi_s=\Delta+2\epsilon$, where $\epsilon>0$. We further add two validators $a$ and $b$ to the graph, adjacent only to $s$. We let $\sigma_a:=\Delta+\epsilon$ and $\sigma_b:=\epsilon$. As $\sigma_a+\sigma_b\ge\pi_s$, the graph $G'$ must be secure, as $G$ was secure. Next, notice that as the validator $a$ is not path-connected to $C$, $G'$ must be a $C$-local variant of $G$. However, by construction, the attack $(C\cup\{s\},N_G(C)\cup\{b\})$ is valid on the graph $G'\setminus\{a\}$. It follows that $R_0(C,G')=1$.

Stable attacks

While the above impossibility appears to be quite strong, it is somewhat contrived. At the heart of the impossibility is that, under the definition of a valid attack (i.e., Equations 2 and 1), not every validator must be productive in carrying out the attack: there may be a subset of validators in the attack that can yield more net profit than the full coalition. In what follows, we show that if we assume that malicious validator coalitions will choose to add others to their ranks only if it is profitable in net for them to do so, then a local security condition with guarantees similar to those in Theorem 4 does indeed exist. Formally, for $A\subseteq S$ and $B\subseteq V$, we say that an attack $(A,B)$ is stable if it is valid (i.e., Equations 2 and 1 hold) and, for all $A'\subseteq A$ and $B'\subseteq B$ such that $(A',B')$ is valid,

$$\sigma_{B\setminus B'}<\pi_{A\setminus A'} \tag{44}$$

Intuitively, if (44) did not hold, then the validators of $B'$ would be better off ditching those in $B\setminus B'$ and attacking only the services in $A'$.

We say that a disjoint sequence $(A_1,B_1),\ldots,(A_T,B_T)\in 2^S\times 2^V$ is a cascade of stable attacks on a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ if, for each $t\in[T]$, $(A_t,B_t)$ is a stable attack on $G\setminus\bigcup_{i=1}^{t-1}B_i$. We denote by $\mathcal{S}(G)$ the set of all such sequences of cascading stable attacks. In light of Theorem 8, we redefine our notion of worst-case stake loss using stable attacks:

$$R_\psi(C,G):=\psi+\max_{D\in\mathbb{D}_\psi(C,G)}\;\max_{(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{S}(G\setminus D)}\frac{\sigma_{\bigcup_t B_t\cap\Gamma(C)}}{\sigma_{\Gamma(C)}} \tag{45}$$

Unlike valid attacks, unions of sequences of cascading stable attacks need not be stable (which will complicate the proof of Theorem 10 in the next section).

Claim 9.

There exists a restaking graph $G$ and a sequence $(A_1,B_1),(A_2,B_2)\in\mathcal{S}(G)$ such that $(A_1\cup A_2,B_1\cup B_2)\notin\mathcal{S}(G)$.

Proof.

Consider the restaking graph with two services $S=\{x,y\}$ and two validators $V=\{a,b\}$, where both validators restake in both services. Furthermore, $\pi_x=\pi_y=2$, $\alpha_x=\alpha_y=\frac{1}{2}$, and $\sigma_a=\sigma_b=1$. In this case, the sequence $(\{x\},\{a\}),(\{y\},\{b\})\in\mathcal{S}(G)$. However, $(\{x,y\},\{a,b\})\notin\mathcal{S}(G)$, because the attack $(\{x,y\},\{a\})$ is a valid attack.
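On graphs this small, validity and stability can be checked by brute force. The sketch below (illustrative and exponential-time; all checks are on the full graph $G$) reproduces the counterexample.

```python
from itertools import combinations

# The Claim 9 example: services x, y with pi = 2, alpha = 1/2;
# validators a, b with sigma = 1, each restaked in both services.
pi = {"x": 2.0, "y": 2.0}
alpha = {"x": 0.5, "y": 0.5}
sigma = {"a": 1.0, "b": 1.0}
restakes = {"a": {"x", "y"}, "b": {"x", "y"}}

def subsets(xs):
    xs = list(xs)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

def is_attacking_coalition(A, B):
    # B must hold an alpha_s fraction of every attacked service's stake
    return all(sum(sigma[v] for v in B if s in restakes[v])
               >= alpha[s] * sum(sigma[v] for v in sigma if s in restakes[v])
               for s in A)

def is_valid(A, B):
    return is_attacking_coalition(A, B) and \
        sum(sigma[v] for v in B) < sum(pi[s] for s in A)

def is_stable(A, B):
    # stable: valid, and every valid sub-attack (A', B') leaves the dropped
    # validators B \ B' strictly profitable in net, per eq. (44)
    if not is_valid(A, B):
        return False
    return all(sum(sigma[v] for v in B - B2) < sum(pi[s] for s in A - A2)
               for A2 in subsets(A) for B2 in subsets(B)
               if (A2, B2) != (A, B) and is_valid(A2, B2))

assert is_stable({"x"}, {"a"}) and is_stable({"y"}, {"b"})
assert is_valid({"x", "y"}, {"a", "b"}) and not is_stable({"x", "y"}, {"a", "b"})
```

The union fails stability exactly because $(\{x,y\},\{a\})$ is valid while $\sigma_b=1\not<\pi_\emptyset=0$.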

6 A Local Security Condition for Stable Attacks

In this section, we give a family of local security conditions that yield guarantees on the local worst-case stake loss $R_\psi(C,G)$ that resemble our result from Theorem 4 for global security. In Theorems 4 and 7, we showed that security with $\gamma$-slack was necessary and sufficient in order to obtain a $\left(1+\frac{1}{\gamma}\right)\psi$ upper bound on $R_\psi(G)$. In other words, it was both necessary and sufficient to ensure that all attacking coalitions $(A,B)$ were overcollateralized multiplicatively by a factor of $(1+\gamma)$. Our condition for local security is similar: we must ensure that certain attack headers $(X,Y)$ are overcollateralized multiplicatively by a factor of $(1+\gamma)$.

Attack header

Formally, we say that $(X,Y)$ is an attack header, for $X\subseteq S$ and $Y\subseteq\Gamma(X)$, if there exists a set of validators $B\subseteq V$ satisfying

$$B\cap\Gamma(X)=\emptyset \tag{46}$$

such that $(X,B\cup Y)$ is an attacking coalition. In other words, $(X,Y)$ is an attack header if $Y$ can be appended to a collection of validators $B$ that may be slashed without attacking services in $X$ to form an attacking coalition that attacks the services $X$. An attacking coalition is a special case of an attack header (in which $B$ can be taken to be empty).

Theorem 10.

Let $G=(S,V,E,\pi,\sigma,\alpha)$ be a restaking graph and $C\subseteq S$ be a coalition of services. If, for all attack headers $(X,Y)$ where $X\subseteq C$,

$$(1+\gamma)\,\pi_X\le\sigma_Y \tag{47}$$

then $R_\psi(C,G)<\left(1+\frac{1}{\gamma}\right)\psi$. Furthermore, the Boolean function that checks whether Equation 47 holds for all attack headers is a local security condition.

Proof.

Let $D\in\mathbb{D}_\psi(C,G)$ and $(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{S}(G\setminus D)$ be arbitrary. For each $t\in[T]$, define

$$L_t:=B_t\cap\Gamma(C) \tag{48}$$

to be the set of all validators exclusively securing $C$ that were lost in the $t$th attack. Next, define

$$A'_t:=\left\{s\in A_t\;\middle|\;\sigma_{B_t\setminus L_t}\ge\alpha_s\,\sigma_{N_G(\{s\})\setminus\left(\bigcup_{i=1}^{t-1}B_i\cup D\right)}\right\} \tag{49}$$

to be the maximal set of services such that $(A'_t,B_t\setminus L_t)$ is an attacking coalition on $G\setminus\left(\bigcup_{i=1}^{t-1}B_i\cup D\right)$. Notice also that because $(A_t,B_t)$ is a stable attack on $G\setminus\left(\bigcup_{i=1}^{t-1}B_i\cup D\right)$, we must have that $A_t\setminus A'_t$ is nonempty, and furthermore must be a subset of $C$.

Claim 11.
$$\sigma_{\bigcup_t B_t\cap\Gamma(C)}<\pi_{\bigcup_t A_t\setminus A'_t} \tag{50}$$
Proof.

From stability, we have that

$$\sigma_{L_t}=\sigma_{B_t\setminus(B_t\setminus L_t)}<\pi_{A_t\setminus A'_t} \tag{51}$$

To see why this holds, notice that there are two cases. If the attacking coalition $(A'_t,B_t\setminus L_t)$ is a valid attack, then the inequality follows directly from the stability definition. If instead $(A'_t,B_t\setminus L_t)$ is not a valid attack despite being an attacking coalition, the inequality must still hold because the original attack $(A_t,B_t)$ is valid and therefore satisfies Equation 2. Summing over $t$, we find that

$$\sigma_{\bigcup_t B_t\cap\Gamma(C)}=\sum_{t=1}^{T}\sigma_{L_t}<\sum_{t=1}^{T}\pi_{A_t\setminus A'_t}=\pi_{\bigcup_t A_t\setminus A'_t} \tag{52}$$

Claim 12.

$\left(\bigcup_t\left(A_t\setminus A'_t\right),\left(\bigcup_t B_t\cup D\right)\cap\Gamma(C)\right)$ is an attack header on $G$, and $\bigcup_t\left(A_t\setminus A'_t\right)\subseteq C$.

Proof.

Because each $A_t\setminus A'_t\subseteq C$, we must also have that $\bigcup_t\left(A_t\setminus A'_t\right)\subseteq C$ as well. Next, by applying Corollary 3 and noting the disjointness of the $(A_t,B_t)$, we have that

$$\left(\bigcup_t A_t,\bigcup_t B_t\right)=\left(\bigcup_t\left(A_t\setminus A'_t\right)\cup\bigcup_t A'_t,\;\bigcup_t B_t\right) \tag{53}$$

must be an attacking coalition on $G\setminus D$. In particular, we must have that $\left(\bigcup_t\left(A_t\setminus A'_t\right),\bigcup_t B_t\right)$ is an attacking coalition on $G\setminus D$, whence by Lemma 3 we have that $\left(\bigcup_t\left(A_t\setminus A'_t\right),\bigcup_t B_t\cup D\right)$ is an attacking coalition on $G$. Rewriting the above as

$$\left(\bigcup_t\left(A_t\setminus A'_t\right),\;\left[\left(\bigcup_t B_t\cup D\right)\cap\Gamma(C)\right]\cup\left[\left(\bigcup_t B_t\cup D\right)\setminus\Gamma(C)\right]\right) \tag{54}$$

and noting by construction that

$$\left[\left(\bigcup_t B_t\cup D\right)\setminus\Gamma(C)\right]\cap\Gamma(C)=\emptyset, \tag{55}$$

we find that $\left(\bigcup_t\left(A_t\setminus A'_t\right),\left(\bigcup_t B_t\cup D\right)\cap\Gamma(C)\right)$ is an attack header on $G$, as desired.

Putting these claims together yields the desired result. By Claim 12 and Equation 47, we find that

$$(1+\gamma)\,\pi_{\bigcup_t A_t\setminus A'_t}\le\sigma_{\left(\bigcup_t B_t\cup D\right)\cap\Gamma(C)}\le\sigma_{\bigcup_t B_t\cap\Gamma(C)}+\psi\,\sigma_{\Gamma(C)} \tag{56}$$

Adding in Claim 11, we find that

$$(1+\gamma)\,\sigma_{\bigcup_t B_t\cap\Gamma(C)}<\sigma_{\bigcup_t B_t\cap\Gamma(C)}+\psi\,\sigma_{\Gamma(C)} \tag{57}$$
$$\implies\frac{\sigma_{\bigcup_t B_t\cap\Gamma(C)}}{\sigma_{\Gamma(C)}}<\frac{\psi}{\gamma} \tag{58}$$
$$\implies R_\psi(C,G)<\left(1+\frac{1}{\gamma}\right)\psi \tag{59}$$

as desired. To see that the Boolean function that checks whether Equation 47 holds for all attack headers is a local security condition, observe that it suffices to check that, for all $X\subseteq C$ and $Y\subseteq N_G(C)\cap\Gamma(C)=\Gamma(C)$ such that $(X,Y\cup(N_G(C)\setminus\Gamma(C)))$ is an attacking coalition, Equation 47 holds. Thus, for any restaking graph $G$ and $C$-local variant $G'$, this function will evaluate to the same output.

The condition in (47) can be checked via enumeration, although the time required to do so grows exponentially in the number of services in C and the number of validators that contribute security exclusively to services in C. In the event that this is a prohibitive amount of computation, the same guarantee holds under a stronger, easily checked local analog of the EigenLayer sufficient condition (3) which, in effect, treats as malicious all validators that contribute security to any services outside of C.

Corollary 13.

For any restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ and coalition of services $C\subseteq S$, satisfaction of the local condition

$$\sum_{s\in N_G(\{v\})}\frac{\sigma_v}{\sigma_{N_G(\{s\})}}\,(1+\gamma)\,\pi_s\alpha'_s\le\sigma_v \tag{60}$$

for all $v\in N_G(C)$ with $N_G(\{v\})\subseteq C$ guarantees that $R_\psi(C,G)<\left(1+\frac{1}{\gamma}\right)\psi$, where, for each $s\in C$,

$$\alpha'_s:=\alpha_s-\frac{\sigma_{N_G(\{s\})\setminus\Gamma(C)}}{\sigma_{N_G(\{s\})}} \tag{61}$$
Proof.

This follows from Theorems 10 and 1. Notice that to guarantee that the condition (47) from Theorem 10 holds for all attack headers $(X,Y)$, it suffices to guarantee that no valid attacks exist when (i) all profits from corruption for services in $C$ are inflated by a multiplicative factor of $(1+\gamma)$, and (ii) the fraction of stake required to corrupt a given service is offset by the fraction of stake in that service that is also restaking for services that do not belong to $C$. As Theorem 1 provides a sufficient condition that rules out all valid attacks, Equation 60 guarantees that all attack headers are overcollateralized by a multiplicative factor of $(1+\gamma)$.
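Condition (60) is straightforward to evaluate. The sketch below (hypothetical dict/set representation, as in the earlier sketches) computes the adjusted thresholds $\alpha'_s$ of (61) and checks the condition over the validators in $\Gamma(C)$.

```python
def local_condition_holds(C, gamma, pi, alpha, sigma, restakes):
    """Check the local EigenLayer-style condition (60) for coalition C.
    restakes maps each validator to the set of services it secures."""
    stake_on = {s: sum(sigma[v] for v in restakes if s in restakes[v])
                for s in pi}
    gamma_C = {v for v in restakes if restakes[v] <= C}   # Gamma(C)
    for v in gamma_C:
        load = 0.0
        for s in restakes[v]:
            # alpha'_s of eq. (61): offset by stake restaked outside C
            outside = sum(sigma[u] for u in restakes
                          if s in restakes[u] and u not in gamma_C)
            load += (sigma[v] / stake_on[s]) * (1 + gamma) * pi[s] \
                    * (alpha[s] - outside / stake_on[s])
        if load > sigma[v]:
            return False
    return True
```

This runs in polynomial time, in contrast to enumerating all attack headers for condition (47).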

7 Lower Bounds for Local Security

We next show senses in which Theorem 10 (and more strongly, Corollary 13) is tight.

Corollary 14.

For any $\psi,\gamma,\epsilon>0$ such that

$$0\le\left(1+\frac{1}{\gamma}\right)\psi-\epsilon\le 1, \tag{62}$$

there exists a restaking graph $G$ and coalition of services $C$ that satisfy the condition (60) from Corollary 13 but have $R_\psi(C,G)\ge\left(1+\frac{1}{\gamma}\right)\psi-\epsilon$.

Proof.

Notice that if we take $C=S$, the condition (60) from Corollary 13 is identical to the condition (17) from Corollary 5. Furthermore, $R_\psi(S,G)$ is the same as $R_\psi(G)$, except that $R_\psi(S,G)$ considers only stable attacks. Repeating the argument from Theorem 7, and noting that the attack given in the proof of that result is stable, we obtain the desired result.

Theorem 15.

For any $\gamma>0$, there exists a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ that satisfies (17) from Corollary 5, but for which there exists a $C\subseteq S$ such that $R_0(C,G)=1$.

Proof.

Let there be three services $S=\{x,y,z\}$, three validators $V=\{a,b,c\}$, and edges as follows:

$$E:=\{(x,a),(x,b),(y,b),(y,c),(z,c),(z,a)\}. \tag{63}$$

Next, let $\alpha_s=1$ for all $s\in S$, and let $\pi_x=\pi_y=\pi_z=:\pi>0$ be an arbitrary positive constant. Next, pick $\sigma_a$ such that

$$\sigma_a<2\pi, \tag{64}$$

let $\sigma_b:=2(1+\gamma)\pi$, and let $\sigma_c:=2(1+\gamma)\pi$. This graph satisfies (17) from Corollary 5, as

$$\frac{\sigma_a}{\sigma_a+\sigma_b}(1+\gamma)\pi+\frac{\sigma_a}{\sigma_a+\sigma_c}(1+\gamma)\pi<(1+\gamma)\pi\left(\frac{1}{\sigma_b}+\frac{1}{\sigma_c}\right)\sigma_a=\sigma_a \tag{65}$$
$$\frac{\sigma_b}{\sigma_a+\sigma_b}(1+\gamma)\pi+\frac{\sigma_b}{\sigma_b+\sigma_c}(1+\gamma)\pi<\frac{3}{2}(1+\gamma)\pi<\sigma_b \tag{66}$$
$$\frac{\sigma_c}{\sigma_a+\sigma_c}(1+\gamma)\pi+\frac{\sigma_c}{\sigma_b+\sigma_c}(1+\gamma)\pi<\frac{3}{2}(1+\gamma)\pi<\sigma_c \tag{67}$$

Next, let $C:=\{x,z\}$, and observe that the shock $D=\{b,c\}$ is an element of $\mathbb{D}_0(C,G)$. However, because $\sigma_a<\pi_x+\pi_z=2\pi$, we must have that $(\{x,z\},\{a\})$ is a stable attack on $G\setminus D$. As $\{a\}=\Gamma(C)$, it follows that $R_0(C,G)=1$.

8 Long Cascades

Cascade Structure

Although Corollary 3 shows that any long cascade can be collapsed into a short, one-step attack, it is arguably more dangerous if large attacks can be carried out through long cascades of small attacks that each require less coordination. In this section, we show how adding $\gamma$-slack also enables us to upper-bound the length of a cascade in the worst case. Our results depend on what we call the reference depth of a cascading sequence of attacks. Formally, for a given restaking graph $G$ and coalition of validators $B_0$, we say that a valid cascade of attacks $(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{C}(G\setminus B_0)$ has reference depth $k$ if

$$k=\max\{i\in[T]\mid\exists t\in[T]\text{ s.t. }N_G(A_t)\cap B_{t-i}\ne\emptyset\} \tag{68}$$

In other words, a cascading sequence has reference depth $k$ if the services attacked in a given time step are not affected by validators that were slashed more than $k$ steps earlier.

Theorem 16.

Suppose that a restaking graph $G=(S,V,E,\pi,\sigma,\alpha)$ is secure with $\gamma$-slack for some $\gamma>0$. Let $\epsilon=\min_{v\in V}\sigma_v$ denote the minimum stake held by a validator. Then, for any $\psi>0$, $B_0\in\mathbb{D}_\psi(G)$, and $(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{C}(G\setminus B_0)$ with reference depth $k$,

$$T<k\left(1+\log_{1+\gamma}\frac{\psi\,\sigma_V}{\epsilon\gamma}\right) \tag{69}$$
Proof.

Because $(A_1,B_1),\ldots,(A_T,B_T)\in\mathcal{C}(G\setminus B_0)$ has reference depth $k$, it follows that if we define

$$A'_i:=\bigcup_{t=k(i-1)+1}^{ki}A_t \tag{70}$$
$$B'_i:=\bigcup_{t=k(i-1)+1}^{ki}B_t \tag{71}$$
$$B'_0:=B_0 \tag{72}$$

for $i\in\{1,\ldots,\lceil T/k\rceil\}$, we must have that, for all $j<i-1$,

$$N_G(A'_i)\cap B'_j=\emptyset \tag{73}$$

Furthermore, by Corollary 3, we must have that the sequence $(A'_1,B'_1),\ldots,(A'_{\lceil T/k\rceil},B'_{\lceil T/k\rceil})\in\mathcal{C}(G\setminus B_0)$. Applying Corollary 3 again, we further have that, for every $i$, $\left(\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j,\bigcup_{j=i+1}^{\lceil T/k\rceil}B'_j\right)$ is a valid attack on $G\setminus\bigcup_{j=0}^{i}B'_j$. It follows by Equation 2 that, for any $i\in\{0,\ldots,\lceil T/k\rceil-1\}$,

$$\pi_{\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j}>\sigma_{\bigcup_{j=i+1}^{\lceil T/k\rceil}B'_j}=\sum_{j=i+1}^{\lceil T/k\rceil}\sigma_{B'_j} \tag{74}$$

Next, noting that

$$N_G\left(\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j\right)\cap\bigcup_{j=1}^{i-1}B'_j=\emptyset \tag{75}$$

by Equation 73, it follows that $\left(\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j,\bigcup_{j=i+1}^{\lceil T/k\rceil}B'_j\right)$ is indeed also a valid attack on $G\setminus B'_i$. Applying Lemma 2, we then find that $\left(\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j,\;B'_i\cup\bigcup_{j=i+1}^{\lceil T/k\rceil}B'_j\right)$ is an attacking coalition on the original graph $G$. Because $G$ is secure with $\gamma$-slack, Equation 11 must now hold for this pair, whence, for all $i\in\{0,\ldots,\lceil T/k\rceil-1\}$,

$$(1+\gamma)\,\pi_{\bigcup_{j=i+1}^{\lceil T/k\rceil}A'_j}\le\sigma_{\bigcup_{j=i}^{\lceil T/k\rceil}B'_j}=\sigma_{B'_i}+\sum_{j=i+1}^{\lceil T/k\rceil}\sigma_{B'_j} \tag{76}$$

Putting this together with Equation 74, we have that, for all $i\in\{0,\ldots,\lceil T/k\rceil-1\}$,

$$(1+\gamma)\sum_{j=i+1}^{\lceil T/k\rceil}\sigma_{B'_j}<\sigma_{B'_i}+\sum_{j=i+1}^{\lceil T/k\rceil}\sigma_{B'_j} \tag{77}$$
$$\implies\sigma_{B'_i}>\gamma\sum_{j=i+1}^{\lceil T/k\rceil}\sigma_{B'_j} \tag{78}$$

It can be shown inductively from Equation 78 that $\sigma_{B'_i}>X_i$, where we define the sequence $X_i$ by

$$X_{\lceil T/k\rceil}:=\epsilon \tag{79}$$
$$X_i:=\gamma\sum_{j=i+1}^{\lceil T/k\rceil}X_j\qquad\forall i\in\{0,\ldots,\lceil T/k\rceil-1\} \tag{80}$$

Solving, we find that

$$X_0=\epsilon\gamma(1+\gamma)^{\lceil T/k\rceil-1} \tag{81}$$

It then follows, as desired, that

$$\psi\,\sigma_V\ge\sigma_{B_0}>\epsilon\gamma(1+\gamma)^{\lceil T/k\rceil-1}\implies T<k\left(1+\log_{1+\gamma}\frac{\psi\,\sigma_V}{\epsilon\gamma}\right) \tag{82}$$
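The bound of Theorem 16 is easy to evaluate in practice. The sketch below computes it for illustrative parameter values (the specific numbers are not from the paper).

```python
import math

def cascade_length_bound(gamma, psi, sigma_total, min_stake, k):
    """Strict upper bound from Theorem 16 on the length T of a cascade of
    reference depth k, on a graph that is secure with gamma-slack."""
    return k * (1 + math.log(psi * sigma_total / (min_stake * gamma), 1 + gamma))

# e.g. gamma = 1 (100% slack), a 0.1% shock to 10,000 units of stake,
# minimum validator stake 1, reference depth 2:
bound = cascade_length_bound(1.0, 0.001, 10_000.0, 1.0, 2)
```

Note that the bound shrinks as the slack $\gamma$ grows, and grows only logarithmically in the total stake.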
