Robust Restaking Networks
Abstract
We study the risks of validator reuse across multiple services in a restaking protocol. We characterize the robust security of a restaking network as a function of the buffer between the costs and profits from attacks. For example, our results imply that if attack costs always exceed attack profits by 10%, then a sudden loss of 0.1% of the overall stake (e.g., due to a software error) cannot result in the ultimate loss of more than 1.1% of the overall stake. We also provide local analogs of these overcollateralization conditions and robust security guarantees that apply specifically for a target service or coalition of services. All of our bounds on worst-case stake loss are the best possible. Finally, we bound the maximum-possible length of a cascade of attacks.
Our results suggest measures of robustness that could be exposed to the participants in a restaking protocol. We also suggest polynomial-time computable sufficient conditions that can proxy for these measures.
Keywords and phrases:
Proof of stake, Restaking, Staking Risks
2012 ACM Subject Classification:
Networks → Network economics
Acknowledgements:
We thank Tarun Chitra, Soubhik Deb, Sreeram Kannan, Mike Neuder, and Mallesh Pai for comments on earlier drafts of this paper. We thank Soubhik and Sreeram in particular for emphasizing the importance of local guarantees.
Funding:
This research was supported in part by NSF awards CCF-2006737 and CNS-2212745, and research awards from the Briger Family Digital Finance Lab and the Center for Digital Finance and Technologies.
Editors:
Raghu Meka
1 Introduction
1.1 Sharing Validators Across Services
Major blockchain protocols such as Bitcoin or Ethereum are “decentralized,” meaning that transaction execution is carried out by a large and diverse set of “validators.” Such protocols offer a form of “trusted computation,” in the sense that, because they are decentralized, no one individual or entity can easily interfere with their execution. A decentralized and Turing-complete smart contract platform such as Ethereum can then be viewed as a trusted programmable computer capable of performing arbitrary computations.
While Turing-complete, the computing functionality offered by Ethereum smart contracts suffers from limitations imposed by design decisions in the underlying consensus protocol. Most obviously, computation and storage in the Ethereum virtual machine is scarce, with perhaps 15–20 transactions processed per second. Could the Ethereum protocol be somehow bypassed, opening the door for different or more powerful computing functionality, while retaining at least some of the protocol’s decentralization? Or, what about applications that are not compatible with all Ethereum validators, perhaps due to demanding hardware requirements or regulatory constraints?
One natural approach to addressing these challenges is to allow the reuse of a blockchain protocol’s validators across multiple services, where a “service” is some task that could be carried out by some subset of validators. (The initial blockchain protocol can be viewed as the canonical service, performed by all validators.) For example, such services could include alternative consensus protocols (perhaps with higher throughput, a different virtual machine, or different consistency-liveness trade-offs), storage (“data availability”), or verifiable off-chain computation (“zk coprocessors”). Several projects, in various stages of production, are currently exploring this idea; the EigenLayer restaking protocol [15] is perhaps the most well known of them. We stress that our goal here is to develop a model that isolates some of the fundamental challenges and risks of validator reuse, independent of any specific implementation of the idea.
The obvious danger of validator reuse is an increased risk of a validator deviating from its intended behavior (e.g., due to overwhelming computational responsibilities). Our focus here is on deliberate validator deviations in response to economic incentives, such as the profits that could be obtained by corrupting one or more services. The goal of this paper is to quantify such risks:
Under what conditions can validators be safely reused across multiple services?
1.2 Cryptoeconomic Security
We first review the usual “cryptoeconomic” approach to answering a more basic question: when is a blockchain protocol, without any additional services, “safe from attack”? The idea is to perform a cost-benefit analysis from the perspective of an attacker, and declare the protocol safe if the cost of carrying out an attack exceeds the profit that the attacker can expect from it; see also Figure 1. In the specific case of a proof-of-stake blockchain protocol with slashing (such as Ethereum), the cost can be estimated as the value of the validator stake that would be lost to slashing following an attack. For example, let $V$ denote the set of validators of a proof-of-stake blockchain protocol and $\sigma_v$ the stake of validator $v$ (e.g., 32 ETH in Ethereum). In a typical PBFT-type protocol in which double-voting validators lose all of their stake, the cost of an attack (i.e., causing a consistency violation) can be bounded below by $\frac{1}{3}\sum_{v \in V} \sigma_v$. In this case, the protocol can be regarded as cryptoeconomically secure provided the estimated profit $\pi$ of an attack is less than this quantity. To the extent that there is a “buffer” between $\pi$ and $\frac{1}{3}\sum_{v \in V} \sigma_v$, the protocol can be treated as “robustly secure,” meaning secure even after a sudden loss of some amount of stake (e.g., due to slashing caused by a software error).
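As a purely illustrative back-of-the-envelope check of this cost-benefit comparison (the validator count, stakes, and attack profit below are hypothetical assumptions, not protocol data):

```python
# Sketch: lower bound on the cost of a consistency attack in a
# PBFT-style proof-of-stake protocol in which double-voting validators
# lose all of their stake. All numbers are hypothetical.
stakes = [32.0] * 1_000_000      # one million validators, 32 ETH each
attack_cost = sum(stakes) / 3    # at least a third of the stake must double-vote
profit = 5_000_000.0             # assumed attack profit, in ETH
print(attack_cost > profit)      # True: cost exceeds profit, so "secure"
```
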
1.3 Our Results: Robustly Secure Validator Reuse
Viewing the basic scenario of a blockchain protocol as a star graph (with one “service” representing the protocol connected to all the validators running it), the more complex scenario of validators reused across multiple services can be viewed as an arbitrary bipartite graph (Figure 2). As before, we suppose that each validator $v$ has some stake $\sigma_v$ that can be confiscated in the event that $v$ participates in an attack on a service that it has agreed to support. There is now a set $S$ of services, with $\pi_s$ denoting the profit an attacker would obtain by compromising the service $s \in S$. (We follow [15] and assume that the estimates on attack profitability, the $\pi_s$’s, are given. Developing tools to help produce such estimates in practice is an important open research direction.) We assume that compromising a service $s$ requires the participation of an $\alpha_s$ fraction of the overall validator stake supporting $s$ (e.g., $\alpha_s = 1/3$). With this expanded network formalism to capture multiple services, when should we consider a network to be “secure”?
Intuitively, a network is insecure if there is a set of services that can be profitably attacked. This requires two conditions to be satisfied: the validators carrying out the attack must control sufficient stake to do so (for each service $s$ in the attacked set $A$, the attacking validators $B$ control at least an $\alpha_s$ fraction of the stake pledged to $s$), and they must profit from it (i.e., the total profit $\pi_A$ from corrupting $A$ exceeds the total stake $\sigma_B$ that the validators in $B$ stand to lose). We call such a pair $(A, B)$ a valid attack and a network secure if there are no valid attacks; see also Figure 3. With multiple services, security is an inherently combinatorial (as opposed to binary) notion. For example, the computational problem of checking whether a network is secure is as hard as the (coNP-hard) problem of verifying the expansion of a bipartite graph (see [12]).
When is a network “robustly secure,” in the sense that the sudden loss of a small amount of stake cannot enable a catastrophic attack? Unlike the “all-or-nothing” version of this question with a single service, with multiple services, the following more fine-grained question is the appropriate one: given an initial shock in the form of a sudden loss of a $\psi$ fraction of the overall stake, what is the total fraction of stake that might be lost following any consequent valid attacks?
As in the case of a single service, some amount of “buffer” in stake (relative to attack profits) is necessary for robust security. We parameterize this overcollateralization factor via a parameter $\gamma \ge 0$ and suppose that, whenever $B$ is a subset of validators capable of corrupting all the services in a set $A$, the total stake of $B$ is at least a factor $1 + \gamma$ larger than the total profit from corrupting all of $A$. For example, this condition holds in the network in Figure 3(b) for sufficiently small $\gamma$ (but not for larger values of $\gamma$).
Our first main result (Theorem 4) precisely characterizes the worst-case (over bipartite graphs and shocks, as a function of $\gamma$) fraction of the overall stake that can be lost due to a shock of size $\psi$: $(1 + 1/\gamma)\psi$. Because the network was secure prior to the shock, the value of a $(1 + 1/\gamma)\psi$ fraction of the overall stake is also an upper bound on the total profit obtained from all of the attacked services.
We also show that our result is tight in a strong sense (Theorem 6 and Theorem 7): for every $\psi$, $\gamma$, and $\epsilon$ greater than zero such that $(1 + 1/\gamma)\psi \le 1$, there exists a restaking graph in which a $\psi$ fraction of the overall stake can disappear in a shock that results in the loss of at least a $(1 + 1/\gamma)\psi - \epsilon$ fraction of the overall stake. (After slightly reducing the validator stakes, the network in Figure 3(b) already shows that the bound is essentially tight in a special case: consider a shock that knocks out the validator that is connected to all four services.)
Qualitatively, this result implies that a constant-factor strengthening of the obvious necessary condition for security automatically implies robust security. For example, if attack costs always exceed attack profits by 10%, then a sudden loss of 0.1% of the overall stake cannot result in the ultimate loss of more than 1.1% of the overall stake.
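The arithmetic of this example can be captured in a two-line helper (a sketch; the function name is ours, not the paper’s):

```python
def worst_case_loss_bound(gamma: float, psi: float) -> float:
    # Theorem 4's bound: with a (1 + gamma) buffer between attack costs
    # and attack profits, a shock of size psi leads to a total loss of
    # at most a (1 + 1/gamma) * psi fraction of the overall stake.
    return (1 + 1 / gamma) * psi

# A 10% buffer (gamma = 0.1) and a 0.1% shock (psi = 0.001): at most 1.1%.
print(worst_case_loss_bound(0.1, 0.001))
```
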
Our result suggests a “risk measure” that could be exposed to the participants in a restaking protocol, namely the maximum value of the buffer parameter $\gamma$ for which the overcollateralization condition holds with respect to the current restaking network. We also suggest easy-to-check sufficient conditions that can proxy for this risk measure (Corollary 5). These conditions are similarly tight, as shown in Theorem 6 and Theorem 7.
1.4 Our Results: Local Robust Security Guarantees
The results described in Section 1.3 are “global” with respect to the network structure, in three distinct senses: (i) the overcollateralization condition is assumed to hold for every subset $B$ of validators and every subset $A$ of services that $B$ is capable of corrupting; (ii) the initial shock can affect any subset of validators, subject to the assumed bound of $\psi$ on the total fraction of stake lost; and (iii) any subset of validators might lose stake following the initial shock, subject to our upper bound of $(1 + 1/\gamma)\psi$ on the total fraction of lost stake.
Local guarantees
We next pursue more general “local” guarantees, which are parameterized by a set $C$ of services. (The global guarantees will correspond to the special case in which $C = S$.) For example, $C$ might be a set of closely related services that share a number of dedicated validators. The operators of such a set might object to both the assumptions and the conclusion of our global guarantee:
- How can we be sure that the overcollateralization factor holds for services and validators that we know nothing about?
- And even if we could, how can we be sure that random validators that we have nothing to do with won’t suddenly lose their stake (e.g., because they supported a malicious or buggy service), resulting in an initial shock that causes the loss of more than a $\psi$ fraction of the overall stake?
- And even if we could, how can we be sure that our validators won’t be the ones that lose their stake following a shock that is purely the fault of other services and/or validators?
To address these concerns, we next consider a refined version of the basic model, parameterized by a set $C$ of services. We denote by $V_C$ the validators that are exclusive to $C$, meaning that they contribute to no services outside $C$. (In the special case in which $C = S$, $V_C = V$ and we recover the original model.) Intuitively, the validators in $V_C$ are the ones that services in $C$ are “counting on”; other validators support (potentially malicious or buggy) services outside $C$ and, from $C$’s perspective, could disappear at any time. We then restrict attention to initial shocks in which at most a $\psi$ fraction of the total stake controlled by the validators of $V_C$ is lost. (The shock can affect validators outside of $V_C$ arbitrarily.) The goal is then to identify overcollateralization conditions guaranteeing that, no matter what the initial shock and subsequent attacks, the fraction of stake ultimately lost by the validators in $V_C$ is bounded (by a function of the shock size $\psi$ and an overcollateralization parameter $\gamma$).
Generalizing our global guarantees to local guarantees is not straightforward and, as the next two examples show, requires additional compromises. The first example shows that protection can be guaranteed only against a subset of valid attacks, a natural and well motivated subset that we call “stable attacks.” The second example shows that, even when restricting attention to stable attacks, overcollateralization is required not only for potential attacks, but more generally for what we call “attack headers.”
Necessity of restricting to stable attacks
In more detail, in the first example (depicted on the left of Figure 4), we consider a restaking network with two services and three validators. (Vertices in this figure correspond to validators, which were previously represented as the right-hand side vertices of a bipartite graph; each service is now identified with its neighborhood of validators.) The service highlighted in green on the left has more stake securing it than its profit from corruption, and shares no validators with other services, so one might hope that it would be protected from any shocks that affect only the other validators. However, if the validator outlined in red disappears, the two validators outlined in yellow can attack both services for a positive profit. But this example is unsatisfying: one of the yellow validators could have attacked the other service on its own to yield a larger profit. The addition of the second yellow validator added more to the cost of the attack (its stake) than it added to the profit (the profit from corrupting the service highlighted in green). We show in Theorem 8 that the issue suggested by this example is fundamental: without further restrictions on attacks that rule out contrived examples such as this one, there does not exist a local condition that guarantees local security. In response, we confine attention to sequences of what we call stable attacks, in which all of the attacking validators contribute positively to the profit of the attack (as opposed to free riding on the profits attributable to other attacking validators). The attack in the example is not stable, as the second yellow validator was not a profitable addition to the attack that could have been carried out by the first on its own.
Necessity of overcollateralizing attack headers
Even if we restrict our attention to stable attacks, simple overcollateralization is insufficient to guarantee local security. To appreciate the issue, consider the restaking network shown on the right in Figure 4. In this example, there are three services, each with a profit from corruption of 1, and three validators, each with stake 1. Each validator is used to secure two different services. The service outlined in green is overcollateralized in that its profit from corruption is 1, but two units of stake are securing it. Despite this, if the unrelated validator outlined in red disappears, the two validators outlined in yellow can attack all three services for a profit of $3 - 2 = 1$. Furthermore, if all validators are required to attack each service (i.e., $\alpha_s = 1$ for each service $s$), then this attack is stable, as the inclusion of both validators in the attack is profitable. Thus, despite being overcollateralized, a stable attack can be launched on the service highlighted in green even if stake unrelated to the service disappears in a shock.
We find that to guarantee robust security for a coalition $C$ of services using only “local” information, it is necessary to overcollateralize not only pairs $(A, B)$ where $B$ is a set of validators capable of attacking all services in $A \subseteq C$, but a more general collection of pairs that we call attack headers. Informally, this amounts to requiring that there is some “buffer” in stake for any potential attack on some services in $C$ even if we were to allow every validator that is also securing a service outside of $C$ to join the attack without considering their profitability. Our main result here (Theorem 10) formally provides a local condition guaranteeing that, whenever an initial shock knocks out at most a $\psi$ fraction of the stake that provides security exclusively to $C$, the worst-case loss of such stake, after an arbitrary sequence of stable attacks, is at most a $(1 + 1/\gamma)\psi$ fraction. We show that our bounds are tight and indeed require overcollateralization of all attack headers (Corollary 14 and Theorem 15), and again provide easily computable sufficient (and similarly tight) conditions that can proxy for the overcollateralization condition (Corollary 13). Our local condition generalizes the global overcollateralization condition, with the latter corresponding to the special case of the former in which $C = S$.
1.5 Our Results: Cascading Attacks
An attack on a restaking network results in a loss of stake (of the attacking validators), and this may introduce new opportunities for other sets of validators to carry out profitable attacks. That is, an initial shock may set off an entire cascade of attacks. (All of the bounds on stake loss described in Sections 1.3 and 1.4 hold for cascades of attacks of arbitrary length.) Our final result concerns the maximum-possible length of such a cascade, and shows that this quantity is also governed in part by the overcollateralization factor $\gamma$.
Precisely, we define the reference depth of a cascade of attacks as a measure of the “long-range dependence” between different attacks in an attack sequence. For example, if each attack in the sequence is directly enabled by the loss of the validators slashed in the previous attack, then the reference depth of the sequence is 1. Our main result here (Theorem 16) bounds the maximum-possible attack length as a function of the reference depth $d$, the shock size $\psi$, the overcollateralization factor $\gamma$, and the minimum stake held by a validator. For example, in the case of constant reference depth and equal validator stake amounts, the worst-case attack length is logarithmic in the number of validators with overcollateralization and linear without it.
1.6 Related Work
Our focus on the risks of cascading failures following a small shock echoes some of the themes in the literature on systemic risk in financial networks. For example, Eisenberg and Noe [9] study the existence and structure of inter-firm payments in a financial network following a default. This work is extended by Glasserman and Young [11], who study how “connectedness,” meaning the fraction of liabilities that a firm externally owes, affects contagion risk. Acemoglu et al. [2] build further on this work and study network “stability,” meaning the propensity for shocks to propagate; they show that connectivity initially improves stability, but then at a phase transition, denser connectivity leads to increased shock propagation. In a subsequent paper, Acemoglu et al. [1] unify a number of the preceding results.
A separate line of work, beginning with Chen et al. [6], aims to axiomatically characterize systemic risk measures. The model in [6] can capture, in particular, a contagion model characterized by a matrix of profits and losses over different firms and outcomes. The results in [6] characterize the global measures of risk (operating on the matrix) that satisfy certain sets of desirable axioms. This work is expanded upon in Kromer et al. [13], where the authors consider general outcome measure spaces, as well as in Feinstein et al. [10], where the authors consider set-valued risk measures. Battiston et al. [4] study systemic risk measurement when a regulator has limited information about contracts made between financial network participants (e.g., the fraction of the face value that can likely be recovered if a counterparty defaults), and show how small errors in knowledge can lead to large errors in systemic risk measurement.
Our work also shares some conceptual similarity with the well-known work of Diamond and Dybvig [8] on bank runs and of Brunnermeier et al. [5] on the risks of re-hypothecation.
The model in the present work differs substantially from those considered in the aforementioned papers, in large part because of the idiosyncrasies of restaking networks, including their combinatorial and bipartite nature and their susceptibility to economically motivated attacks.
Restaking has also, to a limited extent, been studied in its own right. The EigenLayer team introduces restaking in [15]. The framing of an economically motivated attacker that trades off stake loss with profits from corruption appears in [15], and is used also by Deb et al. [7]. Chitra and Neuder [14] discuss restaking risk from a validator perspective, comparing restaking with investments in other financial instruments (e.g., bonds). Alexander [3] considers the interplay between existing leveraging schemes and liquid restaking tokens, which can amplify large-scale credit risk.
2 Model
Validators and Services
We consider a setting in which there is a set $V$ of validators and a set $S$ of services. Each service $s \in S$ has some profit from corruption $\pi_s \ge 0$, and each validator $v \in V$ has some stake $\sigma_v > 0$. We also associate with each service $s$ a parameter $\alpha_s \in (0, 1]$ that denotes the fraction of stake required to corrupt/launch an attack on $s$. We call the bipartite graph $G = (S, V, E)$ a restaking graph; an edge $(s, v) \in E$ is drawn between a validator $v$ and a service $s$ if $v$ is restaking for $s$. For a given set of vertices $U$ in a graph $G$, we use the notation $N(U)$ to denote the neighbors of $U$.
Attack Dynamics
For simplicity, we assume that validators lose their full stake if they launch an attack on a service. As such, for a given collection of services $A \subseteq S$, and a given collection of validators $B \subseteq V$ restaking for those services, we say that $(A, B)$ is an attacking coalition for a restaking graph $G$ if the validators in $B$ possess enough stake to corrupt the services $A$:

(1)  $\sum_{v \in B \cap N(s)} \sigma_v \;\ge\; \alpha_s \sum_{v \in N(s)} \sigma_v \quad \text{for all } s \in A.$

We further say that $(A, B)$ is a valid attack if it is an attacking coalition that has an incentive to launch an attack:

(2)  $\sum_{s \in A} \pi_s \;>\; \sum_{v \in B} \sigma_v.$
If a valid attack $(A, B)$ is carried out, we denote by $G \setminus (A \cup B)$ the induced subgraph on $(S \setminus A) \cup (V \setminus B)$. The graph $G \setminus (A \cup B)$ denotes the state of the restaking graph after the attack is carried out. If no valid attacks exist on the graph, we call it secure. To simplify notation, for any subset of validators $B \subseteq V$, we will use the shorthand $\sigma_B$ to denote $\sum_{v \in B} \sigma_v$. Similarly, for any $A \subseteq S$, we will use $\pi_A$ as shorthand for $\sum_{s \in A} \pi_s$.
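Equations (1) and (2) translate directly into code. The sketch below fixes a dictionary encoding of a restaking graph (our own encoding, not from the paper): stakes `sigma[v]`, profits `pi[s]`, thresholds `alpha[s]`, and neighborhoods `N[s]`.

```python
def is_attacking_coalition(A, B, sigma, alpha, N):
    # Equation (1): for every s in A, the stake of B inside N(s) must be
    # at least an alpha_s fraction of all stake in N(s).
    return all(
        sum(sigma[v] for v in N[s] if v in B) >= alpha[s] * sum(sigma[v] for v in N[s])
        for s in A
    )

def is_valid_attack(A, B, sigma, pi, alpha, N):
    # Equation (2): the profit from corrupting A strictly exceeds the
    # stake that the validators in B stand to lose.
    return (
        is_attacking_coalition(A, B, sigma, alpha, N)
        and sum(pi[s] for s in A) > sum(sigma[v] for v in B)
    )

# Toy example: one service worth 4, secured by stake 1 + 2 with alpha = 1.
sigma = {"v1": 1.0, "v2": 2.0}
pi = {"s": 4.0}
alpha = {"s": 1.0}
N = {"s": {"v1", "v2"}}
print(is_valid_attack({"s"}, {"v1", "v2"}, sigma, pi, alpha, N))  # True
```
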
EigenLayer sufficient conditions
We note in passing that, in their whitepaper [15], EigenLayer proposes efficiently verifiable sufficient conditions for security, which can be checked to ensure that no valid attack exists.
Claim 1 (EigenLayer Sufficient Conditions, from Appendix B.1 of the EigenLayer Whitepaper [15]).
A restaking graph $G$ is secure if for each validator $v \in V$,

(3)  $\sigma_v \;\ge\; \sum_{s \in N(v)} \frac{\pi_s}{\alpha_s} \cdot \frac{\sigma_v}{\sigma_{N(s)}}.$
Proof.
It is shown in Appendix B.1 of the whitepaper that if Equation 3 holds for every $v \in V$, then the graph is secure (i.e., no valid attacks satisfying Equations 1 and 2 exist). In brief: summing Equation 3 over the validators of any attacking coalition $(A, B)$ and applying Equation 1 service-by-service yields $\sigma_B \ge \pi_A$, which contradicts Equation 2.
Cascading attacks
Our goal is to understand when a small shock can result in the loss of a large fraction of the overall stake. Small shocks can turn into large shocks by means of a cascading attack. Formally, we say that a sequence $(A_1, B_1), \dots, (A_T, B_T)$ of pairwise-disjoint attacks is a valid cascade of attacks on a restaking graph $G$ if for each $t \in \{1, \dots, T\}$, $(A_t, B_t)$ is a valid attack on $G \setminus (A_1 \cup B_1 \cup \dots \cup A_{t-1} \cup B_{t-1})$. We denote by $\mathcal{C}(G)$ the set of all such sequences of valid cascading attacks.
Worst-case stake loss
We now define a metric that measures the total potential loss of stake due to a sequence of cascading attacks. In our model, we first suppose that an initial small shock decreases the amount of stake. Formally, we define, for a given restaking graph $G$,

(4)  $\mathcal{D}_\psi(G) \;=\; \{D \subseteq V : \sigma_D \le \psi \, \sigma_V\}$

to be the set of validator coalitions that constitute at most a $\psi$-fraction of all stake. Given some $D \subseteq V$, we use the notation $G \setminus D$ to denote the induced subgraph of the restaking graph when we delete the validators $D$. We now define

(5)  $R_\psi(G) \;=\; \frac{1}{\sigma_V} \, \max_{D \in \mathcal{D}_\psi(G)} \; \max_{(A_1, B_1), \dots, (A_T, B_T) \in \mathcal{C}(G \setminus D)} \Big( \sigma_D + \sum_{t=1}^{T} \sigma_{B_t} \Big).$

This quantity represents the worst-case total fraction of stake lost due to an initial $\psi$-fraction of the stake disappearing. By construction, $0 \le R_\psi(G) \le 1$.
3 Overcollateralization Provides Robust Security
In this section, we show that “scaling up” the definition of security automatically results in robust security, meaning bounded losses from cascading attacks that follow an initial shock. We first show that, without loss of generality, it suffices to consider single valid attacks instead of more general cascading attacks.
Lemma 2.
Let $G$ be an arbitrary restaking graph, and further suppose that $(A, B)$ is an attacking coalition on $G \setminus D$, where $D \subseteq V$. Then, $(A, B \cup D)$ is an attacking coalition on $G$.
Proof.
Because $(A, B)$ is an attacking coalition on $G \setminus D$, we must have by Equation 1 that

(6)  $\sigma_{B \cap N(s) \setminus D} \;\ge\; \alpha_s \, \sigma_{N(s) \setminus D} \quad \text{for all } s \in A.$

It follows that for any $s \in A$,

(7)  $\sigma_{(B \cup D) \cap N(s)} \;\ge\; \sigma_{B \cap N(s) \setminus D} + \sigma_{D \cap N(s)}$
(8)  $\qquad \ge\; \alpha_s \, \sigma_{N(s) \setminus D} + \sigma_{D \cap N(s)}$
(9)  $\qquad \ge\; \alpha_s \big( \sigma_{N(s) \setminus D} + \sigma_{D \cap N(s)} \big) \;=\; \alpha_s \, \sigma_{N(s)},$

where the last inequality uses $\alpha_s \le 1$, and the desired result follows.
Corollary 3.
Let $G$ be an arbitrary restaking graph, and further suppose that $(A_1, B_1), \dots, (A_T, B_T)$ is a valid sequence of cascading attacks on $G$. Then, $\big( \bigcup_t A_t, \bigcup_t B_t \big)$ is also a valid attack on $G$.
Proof.
By repeatedly applying Lemma 2, we find that for each $t$, $(A_t, B_1 \cup \dots \cup B_t)$ is an attacking coalition on $G$. It follows by inspection of Equation 1 that we must therefore have that $\big( \bigcup_t A_t, \bigcup_t B_t \big)$ is an attacking coalition on $G$. To finish the result, it suffices to show that Equation 2 holds for this attacking coalition on the original graph $G$. This follows from the disjointness of the $A_t$’s and of the $B_t$’s:

(10)  $\pi_{\bigcup_t A_t} \;=\; \sum_{t=1}^{T} \pi_{A_t} \;>\; \sum_{t=1}^{T} \sigma_{B_t} \;=\; \sigma_{\bigcup_t B_t},$

where in the inner inequality we use that $\pi_{A_t} > \sigma_{B_t}$ for each $t$, by Equation 2, as $(A_t, B_t)$ is a valid attack on $G \setminus (A_1 \cup B_1 \cup \dots \cup A_{t-1} \cup B_{t-1})$. It follows that $\big( \bigcup_t A_t, \bigcup_t B_t \big)$ is a valid attack on $G$.
Adding Multiplicative Slack
Our condition is given by adding multiplicative slack to Equation 2. Formally, we say that a restaking graph $G$ is secure with $\gamma$-slack if for all attacking coalitions $(A, B)$ on $G$,

(11)  $\sigma_B \;\ge\; (1 + \gamma) \, \pi_A.$

Note that a restaking graph is secure if and only if it is secure with $0$-slack.
Theorem 4.
Suppose that a restaking graph $G$ is secure with $\gamma$-slack for some $\gamma > 0$. Then, for any $\psi \in [0, 1]$, $R_\psi(G) \le (1 + 1/\gamma)\psi$.
Proof.
Take any $\psi \in [0, 1]$ and any $D \in \mathcal{D}_\psi(G)$ for some restaking graph $G$ where (11) holds. Let $(A_1, B_1), \dots, (A_T, B_T) \in \mathcal{C}(G \setminus D)$ be arbitrary. Applying Corollary 3, we must have that $\big( \bigcup_t A_t, \bigcup_t B_t \big)$ is a valid attack on $G \setminus D$ as well. Defining $A = \bigcup_t A_t$ and $B = \bigcup_t B_t$, we must therefore have that $(A, B)$ is an attacking coalition on $G \setminus D$, and furthermore that

(12)  $\pi_A \;>\; \sigma_B.$

By Lemma 2, we must also have that $(A, B \cup D)$ is an attacking coalition on the original graph $G$. It then follows that as $G$ is secure with $\gamma$-slack, Equation 11 must hold on $(A, B \cup D)$, whence

(13)  $\sigma_{B \cup D} \;\ge\; (1 + \gamma) \, \pi_A.$

Putting this together with Equation 12, we find that

(14)  $\sigma_B + \sigma_D \;\ge\; \sigma_{B \cup D} \;\ge\; (1 + \gamma)\,\pi_A \;>\; (1 + \gamma)\,\sigma_B, \quad \text{whence} \quad \sigma_B < \sigma_D / \gamma.$

It follows that

(15)  $\sigma_D + \sum_{t=1}^{T} \sigma_{B_t} \;=\; \sigma_D + \sigma_B \;<\; \big( 1 + 1/\gamma \big) \sigma_D$
(16)  $\qquad \le\; \big( 1 + 1/\gamma \big) \psi \, \sigma_V.$

As we took $D$ and the cascade to be arbitrary, we find that $R_\psi(G) \le (1 + 1/\gamma)\psi$, as desired.
The EigenLayer sufficient conditions (3) can be similarly “scaled up” to yield efficiently checkable sufficient conditions for security with -slack (and hence, by Theorem 4, robustness to cascading attacks).
Corollary 5.
Let $G$ be a restaking graph such that, for all validators $v \in V$,

(17)  $\sigma_v \;\ge\; (1 + \gamma) \sum_{s \in N(v)} \frac{\pi_s}{\alpha_s} \cdot \frac{\sigma_v}{\sigma_{N(s)}}.$

Then, $R_\psi(G) \le (1 + 1/\gamma)\psi$ for every $\psi \in [0, 1]$.
Proof.
This follows from Theorem 4 and Claim 1. Noting that the $\gamma$-slack condition holds precisely iff no valid attacks exist when profits from corruption are inflated by a multiplicative factor of $1 + \gamma$, it suffices to apply EigenLayer’s sufficient conditions from Claim 1 with modified profits from corruption $(1 + \gamma)\pi_s$. Note that, given a restaking network, it is straightforward to compute the maximum value of $\gamma$ such that the condition in (17) holds. This value can then be interpreted as an easily computed “risk measure” of such a network.
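This risk measure has a closed form: condition (17) reads $\sigma_v \ge (1 + \gamma) K_v \sigma_v$ with $K_v = \sum_{s \in N(v)} \pi_s / (\alpha_s \sigma_{N(s)})$, so the largest admissible value is $\min_v (1/K_v - 1)$. A polynomial-time sketch (function name ours; validators with $K_v = 0$ are unconstrained):

```python
def eigenlayer_risk_measure(sigma, pi, alpha, N):
    # Largest gamma for which condition (17) holds: min over validators
    # of 1/K_v - 1, where K_v sums pi_s / (alpha_s * sigma_N(s)) over
    # the services that v secures.
    total = {s: sum(sigma[v] for v in N[s]) for s in N}
    gammas = []
    for v in sigma:
        K_v = sum(pi[s] / (alpha[s] * total[s]) for s in N if v in N[s])
        if K_v > 0:
            gammas.append(1.0 / K_v - 1.0)
    return min(gammas) if gammas else float("inf")

# Matches the exhaustive computation on the toy graph: gamma = 1.
print(eigenlayer_risk_measure(
    {"v1": 1.0, "v2": 1.0}, {"s": 1.0}, {"s": 1.0}, {"s": {"v1", "v2"}}
))  # 1.0
```
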
4 Lower Bounds for Global Security
In this section, we show that the upper bounds from the previous section are tight. We first show that if there is no multiplicative slack (i.e., $\gamma = 0$), then very small shocks can cause all stake to be lost in the worst case. This holds even under the EigenLayer conditions (3). (The graph exhibited in the proof of Theorem 6 has a validator whose share of the overall stake vanishes as $\epsilon \to 0$. It is possible to construct a counterexample with similar properties while maintaining that every validator’s share of the overall stake is greater than some universal constant; this is done in the full version of the paper.)
Theorem 6.
For any $\epsilon \in (0, 1)$, there exists a restaking graph $G$ that is secure and meets the EigenLayer condition (3), but has $R_\psi(G) = 1$ for all $\psi \ge \epsilon$.
Proof.
We construct a restaking graph $G$ with one service $s$ and two validators $v_1, v_2$, where an edge exists between each validator and the service (i.e., $E = \{(s, v_1), (s, v_2)\}$). We then let $\alpha_s = 1$, $\pi_s = 1 - \epsilon/2$, $\sigma_{v_1} = \epsilon$, and $\sigma_{v_2} = 1 - \epsilon$. Without loss of generality, we may assume $\psi < 1$ as $R_1(G) = 1$ for any restaking graph $G$. This graph satisfies (3) as

(18)  $\sigma_{v_1} \;=\; \epsilon \;\ge\; (1 - \epsilon/2)\,\epsilon \;=\; \frac{\pi_s}{\alpha_s} \cdot \frac{\sigma_{v_1}}{\sigma_{N(s)}}$
(19)  $\sigma_{v_2} \;=\; 1 - \epsilon \;\ge\; (1 - \epsilon/2)(1 - \epsilon) \;=\; \frac{\pi_s}{\alpha_s} \cdot \frac{\sigma_{v_2}}{\sigma_{N(s)}},$

whence the graph is also secure by Claim 1. We now consider an initial shock $D = \{v_1\}$. As $\sigma_{v_1} = \epsilon \le \psi \, \sigma_V$, it follows that $D \in \mathcal{D}_\psi(G)$. The pair $(\{s\}, \{v_2\})$ is a valid attack on $G \setminus D$, since $\sigma_{v_2} \ge \alpha_s \, \sigma_{N(s) \setminus D} = \sigma_{v_2}$, whence it is an attacking coalition, and $\pi_s = 1 - \epsilon/2 > 1 - \epsilon = \sigma_{v_2}$. It follows that $R_\psi(G) = 1$ as desired.
Next, we show that the bound we give in Theorem 4 (indeed, more strongly, under the condition given in Corollary 5) is tight for all $\gamma > 0$.
Theorem 7.
For any $\psi, \gamma, \epsilon > 0$ such that

(20)  $\big( 1 + 1/\gamma \big) \psi \;\le\; 1,$

there exists a restaking graph $G$ that satisfies the condition (17) from Corollary 5 but has $R_\psi(G) \ge (1 + 1/\gamma)\psi - \epsilon$.
Proof.
We construct a restaking graph $G$ with three validators $v_1, v_2, v_3$ and one service $s$, where an edge exists between each of the validators $v_1$ and $v_2$ and the service $s$ (i.e., the edge set $E = \{(s, v_1), (s, v_2)\}$). Without loss of generality, suppose that $\sigma_V = 1$. Let $\delta \in (0, \min\{\epsilon, \psi/\gamma\})$ be any positive constant. We define

(21)  $\alpha_s \;=\; 1$
(22)  $\sigma_{v_1} \;=\; \psi, \qquad \sigma_{v_2} \;=\; \psi/\gamma - \delta$
(23)  $\pi_s \;=\; \frac{\sigma_{v_1} + \sigma_{v_2}}{1 + \gamma}$
(24)  $\sigma_{v_3} \;=\; 1 - \sigma_{v_1} - \sigma_{v_2}.$

Notice first that $\sigma_{v_2} > 0$ as we have taken $\delta < \psi/\gamma$. Next, observe that

(25)  $\sigma_{v_1} + \sigma_{v_2} \;=\; \big( 1 + 1/\gamma \big)\psi - \delta$
(26)  $\qquad \le\; 1,$

whence $\sigma_{v_3} \ge 0$ by Equation 20. Next, notice by construction that

(27)  $\sigma_{v_1} \;=\; \psi \, \sigma_V.$

This graph meets condition (17) from Corollary 5, as

(28)  $(1 + \gamma) \sum_{s' \in N(v_1)} \frac{\pi_{s'}}{\alpha_{s'}} \cdot \frac{\sigma_{v_1}}{\sigma_{N(s')}} \;=\; (1 + \gamma)\,\pi_s \cdot \frac{\sigma_{v_1}}{\sigma_{v_1} + \sigma_{v_2}}$
(29)  $\qquad =\; (\sigma_{v_1} + \sigma_{v_2}) \cdot \frac{\sigma_{v_1}}{\sigma_{v_1} + \sigma_{v_2}}$
(30)  $\qquad =\; \sigma_{v_1}.$

A similar argument shows that

(31)  $(1 + \gamma)\,\pi_s \cdot \frac{\sigma_{v_2}}{\sigma_{v_1} + \sigma_{v_2}} \;=\; \sigma_{v_2}.$

Finally, as the validator $v_3$ has no neighbors, it also satisfies the condition, whence the graph indeed satisfies (17). We now consider an initial shock $D = \{v_1\}$ to the graph. Because

(32)  $\sigma_{v_1} \;=\; \psi \, (\sigma_{v_1} + \sigma_{v_2} + \sigma_{v_3}),$

the shock constitutes a $\psi$ fraction of the total stake as desired. The attack $(\{s\}, \{v_2\})$ is a valid attack on $G \setminus D$. To see this, notice first that $(\{s\}, \{v_2\})$ is an attacking coalition on $G \setminus D$ as $\sigma_{v_2} \ge \alpha_s \, \sigma_{N(s) \setminus D}$. Furthermore, $v_2$ is incentivized to attack since

(33)  $\pi_s \;=\; \frac{\sigma_{v_1} + \sigma_{v_2}}{1 + \gamma}$
(34)  $\qquad =\; \frac{\psi(1 + \gamma)/\gamma - \delta}{1 + \gamma}$
(35)  $\qquad =\; \psi/\gamma - \frac{\delta}{1 + \gamma}$
(36)  $\qquad >\; \psi/\gamma - \delta \;=\; \sigma_{v_2},$

whence Equation 2 is satisfied for the pair $(\{s\}, \{v_2\})$. It follows that

(37)  $R_\psi(G) \;\ge\; \sigma_{v_1} + \sigma_{v_2} \;=\; \big( 1 + 1/\gamma \big)\psi - \delta \;\ge\; \big( 1 + 1/\gamma \big)\psi - \epsilon,$

as desired.
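As a numeric sanity check of this construction (a sketch with arbitrarily chosen parameters $\psi = 0.05$, $\gamma = 0.5$, $\delta = 0.01$; the stake and profit definitions mirror the reconstruction given in this proof):

```python
# Instantiate the Theorem 7 construction and verify its key properties.
psi, gamma, delta = 0.05, 0.5, 0.01     # hypothetical parameters
s1 = psi                                # sigma_{v1}: the shocked validator
s2 = psi / gamma - delta                # sigma_{v2}: the attacker
pi_s = (s1 + s2) / (1 + gamma)          # profit from corruption of s
s3 = 1 - s1 - s2                        # sigma_{v3}: the isolated validator

# Condition (17) holds with equality: the stake securing s is exactly
# (1 + gamma) times its profit from corruption.
assert abs((s1 + s2) - (1 + gamma) * pi_s) < 1e-12

# After the shock D = {v1}, the attack ({s}, {v2}) is valid: pi_s > sigma_{v2}.
assert pi_s > s2

# The total loss is within delta of the (1 + 1/gamma) * psi upper bound.
loss = s1 + s2
assert abs(loss - ((1 + 1 / gamma) * psi - delta)) < 1e-12
print(loss)
```
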
5 Local Security
The bound in Theorem 4 on the worst-possible stake loss from cascading attacks is reassuring from a global perspective, but less so from the perspective of one or a small number of services who would like an assurance that they will not be among those affected by such attacks. Beginning with this section, we focus on a specific coalition $C \subseteq S$ of services that seeks to insulate their shared security against shocks and resulting cascading attacks that may come about due to the decisions of other services and validators. Formally, we denote by

(38)  $V_C \;=\; \{v \in V : N(v) \subseteq C\}$

the set of validators that exclusively provide security for services in $C$.
Worst-case stake loss (local version)
As before, we first suppose that an initial shock affects the restaking graph. Whereas previously, we considered shocks for which the total stake in the shock was bounded, we now consider shocks for which the total stake that impacts the exclusive security of some coalition of services (i.e., stake that secures services from $C$ and only services from $C$) is bounded. Formally, for any coalition of services $C \subseteq S$ within some restaking graph $G$, we let

(39)  $\mathcal{D}^C_\psi(G) \;=\; \{D \subseteq V : \sigma_{D \cap V_C} \le \psi \, \sigma_{V_C}\}$

denote the set of all validator coalitions that remove at most $\psi \, \sigma_{V_C}$ stake from the aggregate exclusive security of the coalition of services $C$. Notice that shocks $D \in \mathcal{D}^C_\psi(G)$ may have much more total stake than a $\psi$ fraction of the graph. We are instead only guaranteed that the impact of the shock on stake that is being used exclusively for members in $C$ is small. We are now interested in the potential cascading losses that can affect the stake that is exclusively utilized by the coalition $C$ after a shock occurs that destroys at most $\psi \, \sigma_{V_C}$ stake from the aggregate exclusive security of $C$. Formally, we study the quantity

(40)  $R^C_\psi(G) \;=\; \frac{1}{\sigma_{V_C}} \, \max_{D \in \mathcal{D}^C_\psi(G)} \; \max_{(A_1, B_1), \dots, (A_T, B_T) \in \mathcal{C}(G \setminus D)} \sigma_{\left( D \cup \bigcup_t B_t \right) \cap V_C}.$
Local security conditions
We seek sufficient conditions on a restaking graph that guarantee a nontrivial upper bound on $R^C_\psi(G)$. Ideally, the sufficient condition would depend only on the neighborhood/choices of the coalition $C$ to provide a guarantee that holds regardless of the choices made by other validators and services (i.e., the services of $C$ can attempt to “control their own destiny” by ensuring that the locally defined sufficient condition holds). Formally, given a restaking graph $G$ and a coalition of services $C \subseteq S$, we call a restaking graph $G'$ a $C$-local variant of $G$ if $C$ cannot distinguish $G'$ from $G$ on the basis of local information: $C \subseteq S'$, $V_C \subseteq V'$, and

(41)  $N_{G'}(s) = N_G(s), \quad \pi'_s = \pi_s, \quad \text{and} \quad \alpha'_s = \alpha_s \quad \text{for all } s \in C;$
(42)  $N_{G'}(v) = N_G(v) \quad \text{and} \quad \sigma'_v = \sigma_v \quad \text{for all } v \in V_C.$
We then define a local security condition to be a Boolean function $f$ that takes as input a restaking graph $G$ and a coalition of services $P$, such that $f(G, P)$ must be equal to $f(G', P)$ for all $P$-local variants $G'$ of $G$. The intuition behind this definition is that the condition should depend only on service-level information (e.g., profits from corruption, security thresholds) for services in the coalition $P$, and validator-level information for validators in $V_P$.
Local security impossibility
Unfortunately, without further restrictions on the attacks under consideration (like those defined later in this section), it is impossible to construct any nontrivial local security condition that yields any nontrivial upper bound on $R_{\psi,P}$.
Theorem 8.
For any local security condition $f$, any secure restaking graph $G$ and coalition of services $P \subseteq S$ such that $V_P \neq \emptyset$, and any $\psi \ge 0$, there exists a secure $P$-local variant $G'$ of $G$ such that $R_{\psi,P}(G') = 1$.
Proof.
Take any $f$, $P$, and secure $G$ such that $V_P \neq \emptyset$. Define
$\Delta \,:=\, \sigma_V - \pi_S$   (43)
to be the total overcollateralization of $G$ in aggregate; note that $\Delta > 0$, because $(S, V)$ is always an attacking coalition and $G$ is secure. Next, we define $G'$ to be an augmented version of $G$, where we add a new service $t$ with $\alpha_t = 1$ that has a profit from corruption $\pi_t = \Delta + c$, where $c > 0$. We further add validators $u$ and $w$ to the graph who are adjacent only to $t$. We let $\sigma_u = \pi_t$ and $\sigma_w = c$. As $\sigma_u + \sigma_w > \pi_t$ (so that any attacking coalition containing $t$ pays more for $u$ and $w$ than $t$ is worth), the graph $G'$ must be secure as $G$ was secure. Next, notice that as the validators $u$ and $w$ are adjacent only to $t$ and hence not path-connected to $P$, $G'$ must be a $P$-local variant of $G$. However, by construction, the attack $(S \cup \{t\}, V \cup \{w\})$ is valid on the graph $G' - \{u\}$: it is an attacking coalition (as $N_{G'-\{u\}}(t) = \{w\}$), and its profit $\pi_S + \Delta + c = \sigma_V + \sigma_w$ equals the slashed stake. It follows that $R_{\psi,P}(G') = 1$.
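The augmentation can be sanity-checked numerically. The sketch below instantiates the construction as reconstructed here on an assumed one-service base graph (all constants are illustrative), and verifies that the augmented graph admits no valid attack before the shock, yet loses every validator in $V_P$ once $u$ is shocked.

```python
from itertools import chain, combinations

# Assumed toy instance: base graph has service s (pi = 1) secured by v
# (sigma = 2), so Delta = sigma_V - pi_S = 1; take c = 1, hence pi_t = 2,
# sigma_u = pi_t = 2, sigma_w = c = 1, alpha = 1 everywhere.
SIGMA = {"v": 2.0, "u": 2.0, "w": 1.0}
PI = {"s": 1.0, "t": 2.0}
ALPHA = {"s": 1.0, "t": 1.0}
EDGES = {"s": {"v"}, "t": {"u", "w"}}

def nonempty_subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

def valid(A, B, removed=frozenset()):
    """Eqs. (1)-(2): B meets every attacked service's threshold and profits."""
    A, B = set(A), set(B)
    for s in A:
        pool = EDGES[s] - removed
        if sum(SIGMA[v] for v in pool & B) < ALPHA[s] * sum(SIGMA[v] for v in pool):
            return False
    return sum(PI[s] for s in A) >= sum(SIGMA[v] for v in B)

# G' is secure: no valid attack with a nonempty service set exists ...
secure = not any(valid(A, B) for A in nonempty_subsets(PI)
                 for B in nonempty_subsets(SIGMA))
# ... yet shocking {u} enables an attack slashing all of V_P = {v}.
post_shock = valid({"s", "t"}, {"v", "w"}, removed=frozenset({"u"}))
```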
Stable attacks
While the above impossibility appears to be quite strong, it is somewhat contrived. At the heart of the impossibility is that under the definition of a valid attack (i.e., Equations 1 and 2), not every validator must be productive in carrying out the attack. There may be a subset of validators in the attack that can yield more net profit than the full coalition. In what follows, we show that if we assume that malicious validator coalitions will choose to add others to their ranks only if it is profitable for them in net to do so, then a local security condition with guarantees similar to those in Theorem 4 does indeed exist. Formally, for $A \subseteq S$ and $B \subseteq V$, we say that an attack $(A, B)$ is stable if it is valid (i.e., Equations 1 and 2 hold), and for all $A' \subseteq A$ and $B' \subseteq B$ such that $(A', B')$ is valid,
$\pi_A - \sigma_B \;\ge\; \pi_{A'} - \sigma_{B'}.$   (44)
Intuitively, if (44) did not hold, then the validators of $B'$ would be better off ditching those in $B \setminus B'$ and attacking only the services in $A'$.
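Stability is checkable by brute force over sub-attacks. A sketch under the same assumed encoding as before (exponential in $|A| + |B|$; the dictionaries and names are illustrative):

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_attacking(services, validators, A, B):
    """Eq. (1): B holds an alpha_s share of every attacked service's stake."""
    for s in A:
        _pi, alpha, vs = services[s]
        if sum(validators[v] for v in vs & set(B)) < alpha * sum(validators[v] for v in vs):
            return False
    return True

def is_valid(services, validators, A, B):
    """Eq. (2): the profit from corruption covers the slashed stake."""
    profit = sum(services[s][0] for s in A)
    cost = sum(validators[v] for v in B)
    return is_attacking(services, validators, A, B) and profit >= cost

def is_stable(services, validators, A, B):
    """Condition (44): no valid sub-attack (A', B') nets strictly more."""
    net = sum(services[s][0] for s in A) - sum(validators[v] for v in B)
    return is_valid(services, validators, A, B) and all(
        net >= sum(services[s][0] for s in Ap) - sum(validators[v] for v in Bp)
        for Ap in map(set, powerset(A)) for Bp in map(set, powerset(B))
        if is_valid(services, validators, Ap, Bp))
```

For example, if a single service with profit 2 and threshold $1/2$ is secured by two unit-stake validators, then attacking with one validator is stable, while attacking with both is valid but unstable (one validator is unproductive).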
We say that a disjoint sequence $(A_1, B_1), \dots, (A_T, B_T)$ is a cascade of stable attacks on a restaking graph $G$ if for each $i \in [T]$, $(A_i, B_i)$ is a stable attack on $G - (B_1 \cup \cdots \cup B_{i-1})$. We denote by $C_{\mathrm{st}}(G)$ the set of all such sequences of cascading stable attacks. In light of Theorem 8, we redefine our notion of worst-case stake loss using stable attacks:
$R^{\mathrm{st}}_{\psi,P}(G) \,:=\, \sup_{D \in \mathcal{D}_{\psi,P}(G)} \; \sup_{((A_1,B_1),\dots,(A_T,B_T)) \in C_{\mathrm{st}}(G-D)} \; \frac{\sigma_{(D \cup B_1 \cup \cdots \cup B_T) \cap V_P}}{\sigma_{V_P}}.$   (45)
Unlike valid attacks, unions of sequences of stable cascading attacks need not be stable (which will complicate the proof of Theorem 10 in the next section).
Claim 9.
There exists a restaking graph $G$ and a sequence $((A_1, B_1), (A_2, B_2)) \in C_{\mathrm{st}}(G)$ such that $(A_1 \cup A_2, B_1 \cup B_2)$ is not a stable attack on $G$.
Proof.
Consider a restaking graph $G$ where there are two services $s_1, s_2$ and two validators $v_1, v_2$, where both validators are restaking in both services. Furthermore, $\pi_{s_1} = \pi_{s_2} = 1$, $\alpha_{s_1} = \alpha_{s_2} = 1/2$, and $\sigma_{v_1} = \sigma_{v_2} = 1$. In this case, the sequence $((\{s_1\}, \{v_1\}), (\{s_2\}, \{v_2\})) \in C_{\mathrm{st}}(G)$. However, $(\{s_1, s_2\}, \{v_1, v_2\})$ is not stable, because the attack $(\{s_1, s_2\}, \{v_1\})$ is a valid attack with strictly greater net profit.
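This two-step cascade can be verified mechanically. The numbers below ($\pi = 1$, $\alpha = 1/2$, $\sigma = 1$ for both services and validators) are the assumed instantiation used in the reconstruction above; each step is stable on the residual graph, yet the union admits a strictly more profitable sub-attack.

```python
from itertools import chain, combinations

SIGMA = {"v1": 1.0, "v2": 1.0}
PI = {"s1": 1.0, "s2": 1.0}
ALPHA = {"s1": 0.5, "s2": 0.5}
EDGES = {"s1": {"v1", "v2"}, "s2": {"v1", "v2"}}  # both validators in both services

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def valid(A, B, removed=frozenset()):
    """Eqs. (1)-(2) on the residual graph G minus the removed validators."""
    A, B = set(A), set(B)
    for s in A:
        pool = EDGES[s] - removed
        if sum(SIGMA[v] for v in pool & B) < ALPHA[s] * sum(SIGMA[v] for v in pool):
            return False
    return sum(PI[s] for s in A) >= sum(SIGMA[v] for v in B)

def stable(A, B, removed=frozenset()):
    """Condition (44): no valid sub-attack nets strictly more."""
    net = sum(PI[s] for s in A) - sum(SIGMA[v] for v in B)
    return valid(A, B, removed) and all(
        net >= sum(PI[s] for s in Ap) - sum(SIGMA[v] for v in Bp)
        for Ap in map(set, subsets(A)) for Bp in map(set, subsets(B))
        if valid(Ap, Bp, removed))
```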
6 A Local Security Condition for Stable Attacks
In this section, we give a family of local security conditions that yield guarantees on the local worst-case stake loss that resemble our result from Theorem 4 for global security. In Theorems 4 and 7, we showed that security with $\varepsilon$-slack was necessary and sufficient in order to obtain a $(1 + 1/\varepsilon)\psi$ upper bound on $R_\psi(G)$. In other words, it was both necessary and sufficient to ensure that all attacking coalitions were overcollateralized multiplicatively by a factor of $1 + \varepsilon$. Our condition for local security is similar: we must ensure that certain attack headers are overcollateralized multiplicatively by a factor of $1 + \varepsilon$.
Attack header
Formally, we say that $(A, B)$ is an attack header, for $A \subseteq S$ and $B \subseteq V_P$, if there exists a set of validators $W$ satisfying
$W \cap V_P \,=\, \emptyset$   (46)
such that $(A, B \cup W)$ is an attacking coalition. In other words, $(A, B)$ is an attack header if $B$ can be appended to a collection of validators $W$ that may be slashed without attacking the exclusive security of $P$ to form an attacking coalition $(A, B \cup W)$ that attacks the services $A$. An attacking coalition $(A, B)$ with $B \subseteq V_P$ is a special case of an attack header (in which $W$ can be taken as $\emptyset$).
Theorem 10.
Let $G$ be a restaking graph and $P \subseteq S$ be a coalition of services. If, for all attack headers $(A, B)$ where $A \subseteq P$,
$\sigma_B \;\ge\; (1 + \varepsilon)\,\pi_A,$   (47)
then $R^{\mathrm{st}}_{\psi,P}(G) \le (1 + 1/\varepsilon)\psi$. Furthermore, the Boolean function that checks whether Equation 47 holds for all attack headers is a local security condition.
Proof.
Let $D \in \mathcal{D}_{\psi,P}(G)$ and $((A_1, B_1), \dots, (A_T, B_T)) \in C_{\mathrm{st}}(G - D)$ be arbitrary, and write $G_i := G - D - (B_1 \cup \cdots \cup B_{i-1})$. For each $i \in [T]$, define
$B'_i \,:=\, B_i \cap V_P$   (48)
to be the set of all validators exclusively securing $P$ that were lost in the $i$-th attack. Next, define
$A'_i \,:=\, \text{the maximal } A' \subseteq A_i \text{ such that } (A', B_i \setminus B'_i) \text{ is an attacking coalition on } G_i$   (49)
to be the maximal such set of services (well defined, since the attacking-coalition property is preserved under unions of service sets). Notice also that because $(A_i, B_i)$ is a stable attack on $G_i$, we must have that $A_i$ is nonempty, and furthermore $A'_i$ must be a subset of $A_i$.
Claim 11.
$\sigma_{B'_1 \cup \cdots \cup B'_T} \;\le\; \pi_{(A_1 \setminus A'_1) \cup \cdots \cup (A_T \setminus A'_T)}.$   (50)
Proof.
From stability, we have that for each $i \in [T]$,
$\pi_{A_i} - \sigma_{B_i} \;\ge\; \pi_{A'_i} - \sigma_{B_i \setminus B'_i}.$   (51)
To see why this holds, notice that there are two cases. If the attacking coalition $(A'_i, B_i \setminus B'_i)$ is a valid attack, then the inequality follows directly from the stability definition. If instead $(A'_i, B_i \setminus B'_i)$ is not a valid attack despite being an attacking coalition (i.e., $\pi_{A'_i} < \sigma_{B_i \setminus B'_i}$), the inequality must still hold because the original attack is valid and therefore satisfies Equation 2, so its left-hand side is nonnegative while the right-hand side is negative. Rearranging (51) gives $\pi_{A_i \setminus A'_i} \ge \sigma_{B'_i}$. Iterating over $i \in [T]$ and using the disjointness of the attacks, we find that
$\sigma_{B'_1 \cup \cdots \cup B'_T} \;=\; \sum_{i=1}^{T} \sigma_{B'_i} \;\le\; \sum_{i=1}^{T} \pi_{A_i \setminus A'_i} \;=\; \pi_{(A_1 \setminus A'_1) \cup \cdots \cup (A_T \setminus A'_T)}.$   (52)
Claim 12.
Writing $A^* := \bigcup_{i=1}^{T} (A_i \setminus A'_i)$ and $B^* := \bigcup_{i=1}^{T} B'_i$, the pair $(A^*, B^* \cup (D \cap V_P))$ is an attack header on $G$, and $A^* \subseteq P$.
Proof.
Because each $B'_i \subseteq V_P$, we must also have that $B^* \cup (D \cap V_P) \subseteq V_P$ as well. Next, by applying Corollary 3, noting the disjointness of the $B_i$, we have that
$\Big( \textstyle\bigcup_{i=1}^{T} A_i,\; \bigcup_{i=1}^{T} B_i \Big)$   (53)
must be an attacking coalition on $G - D$. In particular, we must have that $(A^*, \bigcup_i B_i)$ is an attacking coalition on $G - D$, whence by Lemma 3 we have that $(A^*, D \cup \bigcup_i B_i)$ is an attacking coalition on $G$. Rewriting the above as
$D \cup \textstyle\bigcup_{i=1}^{T} B_i \;=\; \big( B^* \cup (D \cap V_P) \big) \cup W, \qquad W := \Big( D \cup \textstyle\bigcup_{i=1}^{T} B_i \Big) \setminus V_P,$   (54)
and noting by construction that
$W \cap V_P \,=\, \emptyset,$   (55)
we find that $(A^*, B^* \cup (D \cap V_P))$ is an attack header on $G$, as desired. Moreover, $A^* \subseteq P$: every service in $A_i \setminus A'_i$ requires stake from $B'_i \subseteq V_P$ to meet its corruption threshold, and every service adjacent to a validator in $V_P$ belongs to $P$.
Putting these claims together yields the desired result. By Claim 12 and Equation 47, we find that
$\sigma_{B^*} + \sigma_{D \cap V_P} \;\ge\; (1 + \varepsilon)\,\pi_{A^*},$   (56)
where $A^* = \bigcup_i (A_i \setminus A'_i)$ and $B^* = \bigcup_i B'_i$. Adding in Claim 11, which gives $\sigma_{B^*} \le \pi_{A^*}$, we find that $\varepsilon\,\sigma_{B^*} \le \sigma_{D \cap V_P}$, and hence
$\sigma_{(D \cup B_1 \cup \cdots \cup B_T) \cap V_P} \;=\; \sigma_{D \cap V_P} + \sigma_{B^*}$   (57)
$\;\le\; \Big( 1 + \frac{1}{\varepsilon} \Big)\, \sigma_{D \cap V_P}$   (58)
$\;\le\; \Big( 1 + \frac{1}{\varepsilon} \Big)\, \psi\, \sigma_{V_P},$   (59)
as desired, since $D \in \mathcal{D}_{\psi,P}(G)$ and the $B_i$ are disjoint from $D$. To see that the Boolean function that checks whether Equation 47 holds for all attack headers is a local security condition, observe that by monotonicity it suffices to check that for all $A \subseteq P$ and $B \subseteq V_P$ such that $(A, B \cup (V \setminus V_P))$ is an attacking coalition, Equation 47 holds; whether this is so depends only on $\pi_s$, $\alpha_s$, $\sigma_{N_G(s)}$, and $N_G(s) \cap V_P$ for $s \in P$, together with the stakes of validators in $V_P$. Thus, for any restaking graph $G$ and $P$-local variant $G'$, this function will evaluate to the same output.
The condition in (47) can be checked via enumeration, although the time required to do so grows exponentially in the number of services in $P$ and the number of validators that contribute security exclusively to services in $P$. In the event that this is a prohibitive amount of computation, the same guarantee holds under a stronger, easily checked local analog of the EigenLayer sufficient condition (3) which, in effect, treats as malicious all validators that contribute security to any services outside of $P$.
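The enumeration just described can be sketched as follows (same assumed encoding as earlier; exponential in $|P| + |V_P|$). Because the attacking-coalition property is monotone in the validator set, it suffices to test the single maximal admissible choice $W = V \setminus V_P$ when deciding whether a pair is a header.

```python
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def headers_overcollateralized(services, validators, P, eps):
    """Check (47), as reconstructed: every attack header (A, B) with A in P
    and B in V_P satisfies sigma_B >= (1 + eps) * pi_A."""
    VP = {v for v in validators
          if all(s in P for s in services if v in services[s][2])}
    W = set(validators) - VP  # maximal W satisfying (46)
    for A in map(set, subsets(P)):
        for B in map(set, subsets(VP)):
            # (A, B) is a header iff (A, B | W) is an attacking coalition
            is_header = all(
                sum(validators[u] for u in services[s][2] & (B | W))
                >= services[s][1] * sum(validators[u] for u in services[s][2])
                for s in A)
            if is_header:
                profit = sum(services[s][0] for s in A)
                stake = sum(validators[v] for v in B)
                if stake < (1 + eps) * profit:
                    return False
    return True
```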
Corollary 13.
For any restaking graph $G$ and coalition of services $P \subseteq S$, satisfaction of the local condition
$\sum_{s \in N_G(v)} \frac{\pi_s}{\beta_s\, \sigma_{N_G(s)}} \;\le\; \frac{1}{1 + \varepsilon}$   (60)
for all $v \in V_P$, with $\beta_s > 0$ for every $s \in P$, guarantees that $R^{\mathrm{st}}_{\psi,P}(G) \le (1 + 1/\varepsilon)\psi$, where for each $s \in P$,
$\beta_s \;:=\; \alpha_s \,-\, \frac{\sigma_{N_G(s) \setminus V_P}}{\sigma_{N_G(s)}}.$   (61)
Proof.
This follows from Theorem 10 and Proposition 1. Notice that to guarantee that the condition (47) from Theorem 10 holds for all attack headers $(A, B)$ with $A \subseteq P$, it suffices to guarantee that no valid attacks exist when (i) all profits from corruption for services in $P$ are inflated by a multiplicative factor of $1 + \varepsilon$, and (ii) the fraction of stake required to corrupt a given service is offset by the fraction of stake in that service that is also restaking for services that do not belong to $P$. As Proposition 1 provides a sufficient condition to guarantee the existence of no valid attacks, Equation 60 will guarantee that all attack headers are overcollateralized by a multiplicative factor of $1 + \varepsilon$.
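In contrast with the exponential enumeration above, the local condition itself is checkable in time linear in the number of edges. A sketch, assuming the reconstructed forms of (60) and (61) and the same toy encoding:

```python
def local_condition(services, validators, P, eps):
    """Check (60) for every v in V_P, with beta_s as in (61)."""
    VP = {v for v in validators
          if all(s in P for s in services if v in services[s][2])}
    for v in VP:
        total = 0.0
        for s in services:
            pi, alpha, vs = services[s]
            if v not in vs:
                continue
            pool = sum(validators[u] for u in vs)
            outside = sum(validators[u] for u in vs if u not in VP)
            beta = alpha - outside / pool  # Eq. (61)
            if beta <= 0:
                return False  # s could be corrupted with no V_P stake at all
            total += pi / (beta * pool)
        if total > 1.0 / (1.0 + eps):
            return False
    return True
```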
7 Lower Bounds for Local Security
We next show senses in which Theorem 10 (and more strongly, Corollary 13) is tight.
Corollary 14.
For any $\psi \in [0, 1]$ and $\varepsilon > 0$ such that
$\Big( 1 + \frac{1}{\varepsilon} \Big)\, \psi \;\le\; 1,$   (62)
there exists a restaking graph $G$ and coalition of services $P$ that satisfies the condition (60) from Corollary 13 but has $R^{\mathrm{st}}_{\psi,P}(G) \ge (1 + 1/\varepsilon)\psi$.
Proof.
Notice that if we take $P = S$, the condition (60) from Corollary 13 is identical to the condition (17) from Corollary 5 (as $V_S = V$ and $\beta_s = \alpha_s$). Furthermore, $R^{\mathrm{st}}_{\psi,S}$ is the same as $R_\psi$ except for the fact that $R^{\mathrm{st}}_{\psi,S}$ considers only stable attacks. Repeating the argument from Theorem 7, and noting that the attack given in the proof of that result is stable, we obtain the desired result.
Theorem 15.
For any $\psi, \varepsilon > 0$, there exists a restaking graph $G$ that satisfies (17) from Corollary 5, but there exists a coalition of services $P$ such that $R^{\mathrm{st}}_{\psi,P}(G) = 1$.
Proof.
Let there be three services $s_1, s_2, s_3$, three validators $v_1, v_2, v_3$, and edges as follows:
$N_G(s_1) = \{v_1, v_2\}, \qquad N_G(s_2) = \{v_2, v_3\}, \qquad N_G(s_3) = \{v_3\}.$   (63)
Next, let $\alpha_s = 1$ for all $s \in S$, and let $\delta < 1/\varepsilon$ be an arbitrary positive constant. Next pick $M$ such that
$\frac{1 + \delta}{1 + M} \;\le\; \frac{1}{1 + \varepsilon} \,-\, \frac{\delta}{1 + \delta},$   (64)
let $\sigma_{v_1} = \delta$, $\sigma_{v_2} = 1$, $\sigma_{v_3} = M$, and let $\pi_{s_1} = \delta$, $\pi_{s_2} = 1 + \delta$, $\pi_{s_3} = 0$. This graph satisfies (17) from Corollary 5, as
$\frac{\pi_{s_1}}{\alpha_{s_1} \sigma_{N(s_1)}} \;=\; \frac{\delta}{1 + \delta} \;\le\; \frac{1}{1 + \varepsilon} \qquad \text{(for } v_1\text{)},$   (65)
$\frac{\pi_{s_1}}{\alpha_{s_1} \sigma_{N(s_1)}} + \frac{\pi_{s_2}}{\alpha_{s_2} \sigma_{N(s_2)}} \;=\; \frac{\delta}{1 + \delta} + \frac{1 + \delta}{1 + M} \;\le\; \frac{1}{1 + \varepsilon} \qquad \text{(for } v_2\text{)},$   (66)
$\frac{\pi_{s_2}}{\alpha_{s_2} \sigma_{N(s_2)}} + \frac{\pi_{s_3}}{\alpha_{s_3} \sigma_{N(s_3)}} \;=\; \frac{1 + \delta}{1 + M} \;\le\; \frac{1}{1 + \varepsilon} \qquad \text{(for } v_3\text{)}.$   (67)
Next, let $P = \{s_1\}$ (so that $V_P = \{v_1\}$), let $D = \{v_3\}$, and observe that the shock $D$ is an element of $\mathcal{D}_{\psi,P}(G)$, as $D \cap V_P = \emptyset$. However, because $\pi_{s_1} + \pi_{s_2} = 1 + 2\delta \ge 1 + \delta = \sigma_{v_1} + \sigma_{v_2}$, and because $\sigma_{v_2}$ alone meets the post-shock corruption threshold of $s_2$ but not that of $s_1$, we must have that $(\{s_1, s_2\}, \{v_1, v_2\})$ is a stable attack on $G - D$. As this attack slashes all of $V_P$, it follows that $R^{\mathrm{st}}_{\psi,P}(G) = 1$.
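The construction can be verified numerically. The sketch below fixes assumed parameters $\delta = 1/4$, $\varepsilon = 1/2$, $M = 2$ (consistent with (64)) and checks both that the global per-validator condition holds and that the post-shock attack on $\{s_1, s_2\}$ is stable.

```python
from itertools import chain, combinations

SIGMA = {"v1": 0.25, "v2": 1.0, "v3": 2.0}   # delta = 1/4, M = 2
PI = {"s1": 0.25, "s2": 1.25, "s3": 0.0}
ALPHA = {"s1": 1.0, "s2": 1.0, "s3": 1.0}
EDGES = {"s1": {"v1", "v2"}, "s2": {"v2", "v3"}, "s3": {"v3"}}
EPS = 0.5

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def valid(A, B, removed=frozenset()):
    A, B = set(A), set(B)
    for s in A:
        pool = EDGES[s] - removed
        if sum(SIGMA[v] for v in pool & B) < ALPHA[s] * sum(SIGMA[v] for v in pool):
            return False
    return sum(PI[s] for s in A) >= sum(SIGMA[v] for v in B)

def stable(A, B, removed=frozenset()):
    net = sum(PI[s] for s in A) - sum(SIGMA[v] for v in B)
    return valid(A, B, removed) and all(
        net >= sum(PI[s] for s in Ap) - sum(SIGMA[v] for v in Bp)
        for Ap in map(set, subsets(A)) for Bp in map(set, subsets(B))
        if valid(Ap, Bp, removed))

def eigenlayer_slack(v):
    """Per-validator overcollateralization by (1 + EPS), as in (17)."""
    return sum(PI[s] / (ALPHA[s] * sum(SIGMA[u] for u in EDGES[s]))
               for s in EDGES if v in EDGES[s]) <= 1.0 / (1.0 + EPS)

meets_17 = all(eigenlayer_slack(v) for v in SIGMA)
# Shocking v3 (which holds none of P's exclusive stake) enables a stable
# attack that slashes v1, i.e., all of V_P for P = {s1}.
wipes_out_vp = stable({"s1", "s2"}, {"v1", "v2"}, removed=frozenset({"v3"}))
```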
8 Long Cascades
Cascade Structure
Although Corollary 3 shows that all long cascades can be merged into a short, one-step attack, it is arguably more dangerous if large attacks can be mounted through long cascades of small attacks that each require less coordination. In this section, we show how adding $\varepsilon$-slack also enables us to upper-bound the length of a cascade in the worst case. Our results depend on what we call the reference depth of a cascading sequence of attacks. Formally, for a given restaking graph $G$ and coalition of validators $D$, we say that a valid cascade of attacks $(A_1, B_1), \dots, (A_T, B_T)$ on $G - D$ has reference depth $r$ if, writing $B_0 := D$ and $B_j := \emptyset$ for $j < 0$,
$(A_i, B_i) \text{ is a valid attack on } G - (B_{i-r} \cup \cdots \cup B_{i-1}) \qquad \text{for all } i \in [T].$   (68)
In other words, a cascading sequence has reference depth $r$ if the services attacked in a given time step are not affected by validators that were slashed more than $r$ steps previously.
Theorem 16.
Suppose that a restaking graph $G$ is secure with $\varepsilon$-slack for some $\varepsilon > 0$. Let $\sigma_{\min} := \min_{v \in V} \sigma_v$ denote the minimum stake held by a validator. Then, for any $\psi \in (0, 1]$, $D \in \mathcal{D}_\psi(G)$, and $((A_1, B_1), \dots, (A_T, B_T)) \in C(G - D)$ with reference depth $r$ (and each $B_i$ nonempty),
$T \;\le\; r \left( 1 + \log_{1+\varepsilon}\!\left( \frac{\psi\,\sigma_V}{\varepsilon\,\sigma_{\min}} \right) \right).$   (69)
Proof.
Because $(A_1, B_1), \dots, (A_T, B_T)$ has reference depth $r$, it follows that if we define, for $i \in [K]$ with $K := \lceil T/r \rceil$,
$\tilde{A}_i \,:=\, A_{(i-1)r + 1} \cup \cdots \cup A_{\min(ir,\, T)},$   (70)
$\tilde{B}_i \,:=\, B_{(i-1)r + 1} \cup \cdots \cup B_{\min(ir,\, T)},$   (71)
$\tilde{B}_0 \,:=\, D,$   (72)
we must have that for all $i \in [K]$ and all $j$ with $(i-1)r < j \le \min(ir, T)$,
$(A_j, B_j) \text{ is a valid attack on } G - \big( \tilde{B}_{i-1} \cup B_{(i-1)r+1} \cup \cdots \cup B_{j-1} \big),$   (73)
since every validator referenced by the $j$-th attack was slashed at most $r$ steps previously, and hence lies in block $i - 1$ or in the prefix of block $i$. Furthermore, by Corollary 3, we must have that the sequence $(\tilde{A}_1, \tilde{B}_1), \dots, (\tilde{A}_K, \tilde{B}_K) \in C(G - D)$. Applying Corollary 3 again, we further have that for every $i \in [K]$, $\big( \bigcup_{j \ge i} \tilde{A}_j, \bigcup_{j \ge i} \tilde{B}_j \big)$ is a valid attack on $G - D - (\tilde{B}_1 \cup \cdots \cup \tilde{B}_{i-1})$. It follows by Equation 2 that for any $i \in [K]$,
$\pi_{\bigcup_{j \ge i} \tilde{A}_j} \;\ge\; \sigma_{\bigcup_{j \ge i} \tilde{B}_j}.$   (74)
Next, noting that
$\text{each attack in blocks } i, \dots, K \text{ references only validators slashed in blocks } i - 1, \dots, K$   (75)
by Equation 73, it follows that indeed $\big( \bigcup_{j \ge i} \tilde{A}_j, \bigcup_{j \ge i} \tilde{B}_j \big)$ is also a valid attack on $G - \tilde{B}_{i-1}$. Applying Lemma 2, we then find that $\big( \bigcup_{j \ge i} \tilde{A}_j,\; \tilde{B}_{i-1} \cup \bigcup_{j \ge i} \tilde{B}_j \big)$ is an attacking coalition on the original graph $G$. Because $G$ is secure with $\varepsilon$-slack, Equation 11 must now hold on this pair, whence for all $i \in [K]$,
$\sigma_{\tilde{B}_{i-1}} + \sigma_{\bigcup_{j \ge i} \tilde{B}_j} \;\ge\; (1 + \varepsilon)\, \pi_{\bigcup_{j \ge i} \tilde{A}_j}.$   (76)
Putting this together with Equation 74, we have that for all $i \in [K]$,
$\sigma_{\tilde{B}_{i-1}} + \sigma_{\bigcup_{j \ge i} \tilde{B}_j} \;\ge\; (1 + \varepsilon)\, \sigma_{\bigcup_{j \ge i} \tilde{B}_j},$   (77)
$\sigma_{\bigcup_{j \ge i} \tilde{B}_j} \;\le\; \frac{1}{\varepsilon}\, \sigma_{\tilde{B}_{i-1}}.$   (78)
It can be shown inductively from Equation 78 that the sequence $x_i := \sigma_{\bigcup_{j \ge i} \tilde{B}_j}$ satisfies $x_i \le y_i$, where we define the sequence $(y_i)$ by
$y_1 \,:=\, \frac{\psi\,\sigma_V}{\varepsilon},$   (79)
$y_i \,:=\, \frac{y_{i-1}}{1 + \varepsilon} \qquad \text{for } i > 1$   (80)
(the base case applies (78) with $i = 1$ and $\tilde{B}_0 = D \in \mathcal{D}_\psi(G)$; the inductive step applies (77) with $\sigma_{\tilde{B}_{i-1}} = x_{i-1} - x_i$, which gives $x_{i-1} \ge (1 + \varepsilon) x_i$). Solving, we find that
$y_i \;=\; (1 + \varepsilon)^{1 - i}\, \frac{\psi\,\sigma_V}{\varepsilon}.$   (81)
Because the final block is nonempty, $\sigma_{\min} \le x_K \le y_K$, and it then follows as desired that
$T \;\le\; rK \;\le\; r \left( 1 + \log_{1+\varepsilon}\!\left( \frac{\psi\,\sigma_V}{\varepsilon\,\sigma_{\min}} \right) \right).$   (82)
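For intuition about the strength of this bound, the sketch below evaluates the reconstructed right-hand side of (69) at the parameters from the introduction ($\varepsilon = 0.1$, $\psi = 0.001$) with an assumed stake floor; all numeric values are illustrative.

```python
# Evaluate the cascade-length bound (69), as reconstructed above.
import math

def max_cascade_length(r, eps, psi, sigma_v, sigma_min):
    """r * (1 + log_{1+eps}(psi * sigma_V / (eps * sigma_min)))."""
    blocks = 1 + math.log(psi * sigma_v / (eps * sigma_min), 1 + eps)
    return r * blocks

# eps = 0.1, psi = 0.001, total stake 1.0, minimum stake 1e-6, depth r = 3
bound = max_cascade_length(3, 0.1, 0.001, 1.0, 1e-6)
```

The bound grows only logarithmically in the ratio of shocked stake to the smallest validator's stake, and linearly in the reference depth.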
References
- [1] Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. Networks, shocks, and systemic risk. Technical report, National Bureau of Economic Research, 2015.
- [2] Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. Systemic risk and stability in financial networks. American Economic Review, 105(2):564–608, 2015.
- [3] Carol Alexander. Leveraged restaking of leveraged staking: What are the risks? Available at SSRN 4840805, 2024.
- [4] Stefano Battiston, Guido Caldarelli, Robert M May, Tarik Roukny, and Joseph E Stiglitz. The price of complexity in financial networks. Proceedings of the National Academy of Sciences, 113(36):10031–10036, 2016.
- [5] Markus K Brunnermeier, Gary Gorton, and Arvind Krishnamurthy. Risk topography. NBER Macroeconomics Annual, 26(1):149–176, 2012.
- [6] Chen Chen, Garud Iyengar, and Ciamac C Moallemi. An axiomatic approach to systemic risk. Management Science, 59(6):1373–1388, 2013. doi:10.1287/MNSC.1120.1631.
- [7] Soubhik Deb, Robert Raynor, and Sreeram Kannan. Stakesure: Proof of stake mechanisms with strong cryptoeconomic safety. arXiv preprint, 2024. doi:10.48550/arXiv.2401.05797.
- [8] Douglas W Diamond and Philip H Dybvig. Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91(3):401–419, 1983.
- [9] Larry Eisenberg and Thomas H Noe. Systemic risk in financial systems. Management Science, 47(2):236–249, 2001. doi:10.1287/MNSC.47.2.236.9835.
- [10] Zachary Feinstein, Birgit Rudloff, and Stefan Weber. Measures of systemic risk. SIAM Journal on Financial Mathematics, 8(1):672–708, 2017. doi:10.1137/16M1066087.
- [11] Paul Glasserman and H Peyton Young. Contagion in financial networks. Journal of Economic Literature, 54(3):779–831, 2016.
- [12] Subhash Khot and Rishi Saket. Hardness of bipartite expansion. In 24th Annual European Symposium on Algorithms (ESA 2016). Schloss-Dagstuhl – Leibniz Zentrum für Informatik, 2016. doi:10.4230/LIPIcs.ESA.2016.55.
- [13] Eduard Kromer, Ludger Overbeck, and Katrin Zilch. Systemic risk measures on general measurable spaces. Mathematical Methods of Operations Research, 84:323–357, 2016. doi:10.1007/S00186-016-0545-1.
- [14] Tarun Chitra and Mike Neuder. The risks of LRTs, 2024. URL: https://ethresear.ch/t/the-risks-of-lrts/18799.
- [15] EigenLayer Team. Eigenlayer: The restaking collective, 2023. URL: https://docs.eigenlayer.xyz/overview/whitepaper.