Accumulation Without Homomorphism
Abstract
Accumulation schemes are a simple yet powerful primitive that enables highly efficient constructions of incrementally verifiable computation (IVC). Unfortunately, all prior accumulation schemes rely on homomorphic vector commitments whose security is based on public-key assumptions. It is an interesting open question to construct efficient accumulation schemes that avoid the need for such assumptions.
In this paper, we answer this question affirmatively by constructing an accumulation scheme from non-homomorphic vector commitments which can be realized from solely symmetric-key assumptions (e.g., Merkle trees). We overcome the need for homomorphisms by instead performing spot-checks over error-correcting encodings of the committed vectors.
Unlike prior accumulation schemes, our scheme only supports a bounded number of accumulation steps. We show that such bounded-depth accumulation still suffices to construct proof-carrying data (a generalization of IVC). We also demonstrate several optimizations to our PCD construction which greatly improve concrete efficiency.
Keywords and phrases: Proof-carrying data, incrementally verifiable computation, accumulation schemes
Funding: Benedikt Bünz: Supported by Alpen Labs, Chaincode Labs, and the Sui Foundation.
2012 ACM Subject Classification: Theory of computation → Cryptographic protocols
Editor: Raghu Meka
1 Introduction
Proof-carrying data (PCD) [27] is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely, while ensuring that the correctness of every intermediate step can be verified efficiently. PCD is a generalization of the prior notion of incrementally verifiable computation (IVC) [52]. (IVC is the special case of PCD where the distributed computation graph is a line.)
PCD has found numerous applications in both theory and practice, including enforcing language semantics [29], complexity-preserving succinct arguments [10, 9], verifiable MapReduce computations [28], image provenance [42], and consensus protocols and blockchains [44, 38, 14, 24, 19]. It is thus a key question to understand security assumptions that PCD constructions require, as well as the efficiency that they can attain under these assumptions.
Let us review how existing PCD constructions fare along both dimensions. For simplicity, we focus on the special case of IVC in the following discussion.
PCD from succinctly verifiable arguments
The standard construction of PCD is via recursive composition of succinct non-interactive arguments of knowledge (SNARKs) [10, 8, 9, 26]. Informally, to prove a $T$-step computation, the PCD prover proves that the $T$-th step is correct and that there exists a valid proof for the first $T-1$ steps. The state-of-the-art works in this line follow the template of Fractal [26] by relying on SNARKs in the random oracle model. (The concrete PCD construction makes non-black-box use of the SNARK verifier, which requires us to heuristically instantiate the random oracle.) This means that we can achieve PCD (heuristically) from only symmetric-key cryptography, but at the cost of relying on the existence of SNARKs, which are complex to construct and incur (asymptotically and concretely) high costs for the PCD prover.
PCD from accumulation
A recent popular approach to avoid this reliance on SNARKs is to construct PCD via accumulation schemes [22, 21, 41]. Roughly speaking, instead of recursively checking proofs as above, the PCD prover “accumulates” the proof for each step into a running accumulator, and then the PCD verifier performs a single expensive check on the final accumulator. This line of work has led to simple and efficient PCD schemes that incur low costs for the PCD prover, and so has seen much interest in new constructions and deployments [21, 41, 20, 34, 39, 40]. Unfortunately, all known accumulation schemes rely on homomorphic vector commitments that are only known to exist under public-key assumptions. Additionally, because most existing homomorphic vector commitments are not known to achieve post-quantum security, the resulting accumulation schemes are also quantum-insecure.
Our question
We are thus left in an unsatisfactory state of affairs: on the one hand we have PCD from ROM-based SNARKs that (heuristically) relies on minimal symmetric-key assumptions, but incurs high PCD prover costs due to this reliance on SNARKs, while on the other hand we have PCD from accumulation that relies on public-key assumptions, but achieves comparatively lower PCD prover costs by avoiding SNARKs. This motivates the question we tackle in this paper: can we design PCD that achieves low prover costs by avoiding SNARKs, while simultaneously minimizing assumptions?
1.1 Our contributions
We answer this question positively by constructing accumulation schemes solely in the random-oracle model. Our constructions satisfy a weaker notion of accumulation than that considered in prior work, but we show that this weaker notion still suffices to construct PCD (again, heuristically after instantiating the random oracle). We provide details below.
(1) Bounded-depth accumulation
We introduce a new notion of bounded-depth accumulation schemes that can only provide (knowledge) soundness guarantees for a limited number of consecutive accumulation steps. In contrast, prior accumulation schemes support an unbounded number of accumulation steps.
(2) PCD from bounded-depth accumulation
We show that this weaker notion of accumulation still suffices to construct (a variant of) proof-carrying data. In particular, we show that bounded-depth accumulation suffices to construct PCD for computation graphs of a-priori bounded depth $d$, which is already sufficient for many applications, including the primary application of constructing polynomial-length IVC [10].
We note that, like our construction, all prior PCD constructions only provably support computation graphs of bounded depth [10]. However, unlike these prior works, our construction is vulnerable to attacks when the depth of the computation graph exceeds an a priori fixed constant. See Remarks 2 and 3 for details.
(3) Efficient constructions of bounded-depth accumulation
We construct efficient bounded-depth accumulation schemes from any (non-homomorphic) vector commitment scheme (e.g., random-oracle based Merkle trees) and any linear code.
Compared to the prior state-of-the-art accumulation schemes, our construction has several advantages beyond just avoiding public-key assumptions, such as a truly linear-time accumulation prover (to accumulate proofs for circuits of size $N$, prior accumulation schemes require performing an $N$-sized multi-scalar multiplication over elliptic-curve groups; as explained in prior work [37], this requires a superlinear number of basic operations, and is hence not truly linear time), and plausible post-quantum security. We only claim plausible post-quantum security, as we prove our construction in the random oracle model, instead of the quantum random oracle model [12]; we believe that our results can be extended to the latter model by applying techniques from prior work [25], but leave this to future work. (We note that both our construction and prior accumulation schemes rely on the random oracle model; it is an open question to construct an accumulation scheme in the standard model.)
(4) Efficiency of and optimizations for instantiated PCD
Instantiating our generic PCD construction with our accumulation scheme leads to PCD schemes with numerous prover efficiency benefits compared to prior work. We also provide several optimizations for this scheme, including support for “batch” accumulation, a new low-overhead compiler from low-depth PCD to IVC, and a new hybrid PCD scheme that combines our low-depth PCD with any SNARK-based PCD scheme to achieve the best of both worlds. We detail the impact of these optimizations in Table 1.
Our scheme also has prover efficiency benefits beyond those captured by Table 1. To elaborate on these, we briefly detail the key factors that contribute to PCD prover cost. In all practical PCD constructions, to produce a proof for the next step of the distributed computation, the PCD prover invokes an underlying cryptographic proof system to prove satisfaction of a circuit that contains a circuit representation of the computation step (the PCD predicate), and also performs other checks required by the PCD construction. Thus, PCD prover efficiency is determined by the prover efficiency of this proof system, and by the cost of the additional checks. We call the former the proof-system overhead, and the latter the PCD circuit overhead.
PCD constructions based on ROM-based SNARKs have been able to achieve best-in-class proof-system overhead via extensive asymptotic and concrete optimizations (e.g., by achieving truly linear-time provers [15, 16, 17, 37], or relying on small fields [31]), but suffer from high PCD circuit overhead as the SNARK verifier subcircuit must perform numerous expensive random oracle calls.
Accumulation-based PCD constructions, on the other hand, have the smallest PCD circuit overhead (e.g., just ten thousand gates for Nova [41] vs. over one million gates for Fractal [26]). They also try to lower proof-system overhead by avoiding some cryptographic work (e.g., by requiring commitments only to the witness [20]), and via efficient arithmetizations of predicates (e.g., via cheap custom gates, which enforce checks beyond simple addition and multiplication; examples include high-degree polynomial gates [35] and lookup gates [15, 36], and see [23, 30, 3] for empirical evidence of the benefits of such gates). However, the remaining cryptographic work is quite expensive as it requires public-key operations, and hence the proof-system overhead of these constructions is relatively high.
Our construction of bounded-depth accumulation achieves the best of both worlds: it enjoys the low PCD circuit overhead of accumulation-based PCD constructions while taking advantage of the proof-system optimizations enjoyed by both approaches. In fact, the combination opens up new efficiency improvements that are not possible in either of the two paradigms alone. For example, all known ROM-SNARK-based PCD constructions thus far have relied on Reed–Solomon codes, and incur quasilinear prover costs from the quasilinear encoding time of these codes. Our construction, on the other hand, allows the use of any linear code, including linear-time-encodable codes [50, 33, 37], thus reducing prover costs.
Table 1:

| scheme | circuit overhead per step | IVC verifier |
|---|---|---|
| Fractal [26] | | |
| + STIR [2] (concurrent) | | |
| this paper | | |
| + batch comm. (Sec 2.5) | | |
| + low-overhead IVC (Sec 2.6) | | |
| + hybrid (Sec 2.7) | | |
1.2 Related work
PCD from symmetric-key assumptions
As noted in Section 1, the only end-to-end construction of PCD from symmetric-key assumptions is that of Chiesa, Ojha, and Spooner [26]. We provide a quantitative comparison in Table 1, and focus here on a qualitative comparison. Their construction is based on the Fractal SNARK, which they prove secure in the random oracle model. (Like us, their PCD construction must instantiate the random oracle with a concrete hash function, and can hence only achieve heuristic security.)
Boneh, Drake, Fisch, and Gabizon [13] propose an optimization of the foregoing approach that batches the most expensive component, the low-degree test, across multiple proofs. While this concretely reduces prover cost, it does not lead to an asymptotic improvement in the prover overhead.
Like Fractal and similar SNARKs, our construction is able to take advantage of recent advances in the design of efficient code-based Interactive Oracle Proofs (IOPs) [7, 48]. For example, like recent work [51, 46, 31], we can greatly improve efficiency by relying on extension fields of small characteristic. Furthermore, unlike existing works, our fields do not need to have any special algebraic structure (e.g., large multiplicative subgroups).
PCD from public-key assumptions
Except for the foregoing, all existing concretely-efficient IVC/PCD constructions [9, 18, 22, 21, 13, 41, 39, 20, 34, 40] rely on public-key assumptions, and in particular on the hardness of computing discrete logarithms over elliptic-curve groups, which forces the usage of cryptographically large fields. Furthermore, efficient implementations require cycles of elliptic curves, which have proved unwieldy to implement correctly in practice [43]. In comparison, our construction avoids the need for public-key assumptions and this additional algebraic structure, and is able to use non-cryptographic field sizes.
Concurrent work
The concurrent work LatticeFold [11] takes a complementary approach to constructing plausibly post-quantum accumulation-based PCD: it constructs an accumulation/folding scheme from lattice-based assumptions. Unlike our construction, their accumulation scheme directly supports unbounded depth, allowing it to serve as a drop-in replacement in many existing constructions of accumulation-based PCD. However, this comes at a cost: at each accumulation step, the prover must demonstrate that the accumulation result has a small norm, which incurs a non-trivial computational cost. Furthermore, the dependence on lattice-based assumptions means that their scheme still relies on public-key assumptions.
An interesting direction for future work would be to combine the two approaches to reduce the need for small-norm checks. For example, one could construct a more efficient PCD scheme by first designing a bounded-depth accumulation scheme that relies on the bounded homomorphism supported by lattice commitments. One could then use the transformation of Section 2.7 to combine the resulting (bounded-depth) PCD scheme with the (unbounded-depth) LatticeFold PCD scheme to achieve a more efficient construction than either scheme alone.
Another concurrent work, STIR [2], provides a new proximity test that can be used as a drop-in replacement for the proximity test [5] used in Fractal [26]. As noted in Table 1, our schemes still have lower recursion overhead than this optimized baseline. Furthermore, this new STIR + Fractal-based PCD should also be compatible with our hybrid construction from Section 2.7.
Remark 1 (security of bounded-depth PCD).
All PCD schemes (including ours) only provably support computation graphs of constant depth. However, while there are no known attacks that break the security of prior schemes when the depth is super-constant, the same is not true for our scheme. As we explain in Section 2.1, our scheme is vulnerable to a relatively straightforward attack that obviates any security guarantees when the depth of the computation graph exceeds an a priori fixed constant. We emphasize that even such bounded-depth PCD is already powerful enough to support many interesting applications, including the primary application of constructing polynomial-length IVC [10]. See Remarks 2 and 3 for a more detailed discussion.
2 Techniques
We begin by reviewing the definition of an accumulation scheme [22, 21]. (We restrict our presentation to split accumulation schemes, as defined in [21].) At a high level, an accumulation scheme is used to perform batch verification of a predicate which, for us, will be a non-interactive argument's verifier $\mathcal{V}$. In other words, an accumulation scheme is used to check that $\mathcal{V}(x_1, \pi_1), \dots, \mathcal{V}(x_n, \pi_n)$ all accept, more efficiently than the naive approach of individually verifying each instance–proof pair $(x_i, \pi_i)$.
The workflow of an accumulation scheme is as follows. There are three main algorithms: a prover $P$, verifier $V$, and decider $D$. The prover is initialized with an empty accumulator $\mathsf{acc}_0$, which is used to accumulate an input $(x_1, \pi_1)$ into a new accumulator $\mathsf{acc}_1$. The prover additionally outputs a proof; we write this as $(\mathsf{acc}_1, \mathsf{pf}_1) \leftarrow P(\mathsf{acc}_0, (x_1, \pi_1))$. Later, $\mathsf{acc}_1$ can be used to accumulate a second input, i.e., $(\mathsf{acc}_2, \mathsf{pf}_2) \leftarrow P(\mathsf{acc}_1, (x_2, \pi_2))$, and so on. The correctness of a sequence of accumulations can then be established by checking that: (a) each accumulation step is valid, i.e., $V((x_i, \pi_i), \mathsf{acc}_{i-1}, \mathsf{acc}_i, \mathsf{pf}_i) = 1$; and (b) the final accumulator is valid, i.e., $D(\mathsf{acc}_n) = 1$.
Notice that the decider only acts on the final accumulator, whereas the verifier acts on each accumulation step. Therefore, the crux of an accumulation scheme is making verification as cheap as possible. Towards this, we require that an accumulator $\mathsf{acc}$ can be split into a short instance part $\mathsf{acc}.x$ and a (possibly) long witness part $\mathsf{acc}.w$; we use $\mathsf{acc} = (\mathsf{acc}.x, \mathsf{acc}.w)$ as shorthand. Similarly, we require that an argument proof $\pi$ can be split into instance and witness parts $(\pi.x, \pi.w)$. The point is that the verifier can only look at the instance parts; we write this as $V((x_i, \pi_i.x), \mathsf{acc}_{i-1}.x, \mathsf{acc}_i.x, \mathsf{pf}_i)$.
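To make the split-accumulator workflow concrete, here is a minimal Python interface sketch; the class and method names are illustrative choices of ours, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class Split:
    """An object (argument proof or accumulator) split into a short
    instance part and a possibly long witness part."""
    x: Any  # short instance part (e.g., a commitment)
    w: Any  # long witness part (e.g., the committed vector)

class AccumulationScheme:
    """Illustrative interface. The prover accumulates an input into a running
    accumulator; the verifier checks each step using only instance parts; the
    decider performs one expensive check on the final accumulator."""

    def prove(self, acc: Split, inp: Tuple[Any, Split]) -> Tuple[Split, Any]:
        """Accumulate input (x, pi) into acc; return (new accumulator, step proof pf)."""
        raise NotImplementedError

    def verify(self, inp_x: Any, acc_x: Any, new_acc_x: Any, pf: Any) -> bool:
        """Cheap per-step check; sees only the instance parts."""
        raise NotImplementedError

    def decide(self, acc: Split) -> bool:
        """Expensive one-time check on the final accumulator."""
        raise NotImplementedError
```

A chain of accumulations is then validated by calling `verify` once per step and `decide` once on the final accumulator.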
Definition
An accumulation scheme must satisfy the following properties.
- Completeness: The honest accumulation of any valid input and accumulator should pass both the verifier's and decider's checks. That is, if $D(\mathsf{acc}) = 1$ and $\mathcal{V}(x, \pi) = 1$, then for $(\mathsf{acc}^{\star}, \mathsf{pf}) \leftarrow P(\mathsf{acc}, (x, \pi))$ we have $V((x, \pi.x), \mathsf{acc}.x, \mathsf{acc}^{\star}.x, \mathsf{pf}) = 1$ and $D(\mathsf{acc}^{\star}) = 1$.
- Knowledge soundness: If a new accumulator passes the verifier's and decider's checks, then an efficient extractor can find a valid input and old accumulator that explain it. That is, if $V((x, \pi.x), \mathsf{acc}.x, \mathsf{acc}^{\star}.x, \mathsf{pf}) = 1$ and $D(\mathsf{acc}^{\star}) = 1$, then an efficient extractor can find the witness part of the proof, $\pi.w$, and the witness part of the old accumulator, $\mathsf{acc}.w$, such that $\mathcal{V}(x, \pi) = 1$ and $D(\mathsf{acc}) = 1$.
- Efficiency: The cost of running the accumulation verifier $n$ times plus the cost of running the accumulation decider once should be lower than the cost of running the argument verifier $n$ times.
Accumulation schemes can be generalized to handle multiple inputs and accumulators in each step. For example, the prover's syntax would be $(\mathsf{acc}^{\star}, \mathsf{pf}) \leftarrow P([(x_i, \pi_i)]_{i=1}^{n}, [\mathsf{acc}_j]_{j=1}^{m})$, where $n$ and $m$ are the arities; see Section 3 for a comprehensive definition.
Prior constructions
All prior accumulation schemes [22, 21, 41, 20, 34, 39, 40] crucially use additively homomorphic vector commitment schemes. Informally, a vector commitment scheme allows one to construct a succinct commitment $\mathsf{cm}$ to a vector $w$. The scheme is additively homomorphic if, given commitments $\mathsf{cm}_1$ and $\mathsf{cm}_2$ to vectors $w_1$ and $w_2$, the commitment $\mathsf{cm}_1 + \mathsf{cm}_2$ commits to $w_1 + w_2$. We remark that all known additively homomorphic vector commitment schemes, e.g., Pedersen commitments [45], rely on public-key assumptions.
The general blueprint for an accumulation scheme is as follows. An accumulator witness $\mathsf{acc}.w$ is a vector $w$, and the corresponding instance $\mathsf{acc}.x$ is a commitment to $w$. For simplicity, suppose the prover claims that $\mathsf{acc}_1$ and $\mathsf{acc}_2$ accumulate into $\mathsf{acc}^{\star}$. Roughly speaking, we want to guarantee that the output accumulator is a random linear combination of the input accumulators. The verifier checks this by computing the linear combination of the input commitments $\mathsf{cm}_1$ and $\mathsf{cm}_2$, and checking that the result equals the output commitment [41, 20]. Later, the decider will check that $w^{\star}$ is a "good" vector, and that $\mathsf{acc}^{\star}.x$ commits to $w^{\star}$. Since commitments are binding, $w^{\star}$ must be the correct linear combination of $w_1$ and $w_2$.
We have omitted many details, most notably how to accumulate argument proofs. However, from this description alone we can observe two key properties of the vector commitment scheme. First, it has succinct commitments; this allows the verifier to be efficient. Second, it is additively homomorphic; this allows the verifier to perform meaningful checks. As noted earlier, this combination of properties unfortunately seems to require public-key assumptions.
2.1 Checking linearity
To overcome the foregoing limitation, we make a key observation: to verify that the output accumulator is a linear combination of the input accumulators, it is not necessary to directly compute a linear combination of the input commitments. Instead, it suffices to check that the output accumulator commits to a vector that is a linear combination of the vectors committed by the input accumulators. This idea is a natural one, and has appeared before under the name of “linear combination schemes” [13].
Recall that we want to check that $\mathsf{acc}^{\star}$ commits to $\alpha_1 w_1 + \alpha_2 w_2$, where $\alpha_1, \alpha_2$ are previously chosen scalars. More precisely, we want to check that $w^{\star} = \alpha_1 w_1 + \alpha_2 w_2$, where $w^{\star}$, $w_1$, and $w_2$ are the underlying committed vectors of $\mathsf{acc}^{\star}$, $\mathsf{acc}_1$, and $\mathsf{acc}_2$ (and since commitments are binding, these vectors correspond with the accumulator witnesses). A linearity check is a protocol between the prover and the verifier that convinces the verifier of this claim. Assuming the verifier is public-coin, this can be made non-interactive using random oracles.
Our goal is to construct a linearity check which does not require homomorphic vector commitments. Instead, we require vector commitments with local openings. Informally, these allow the prover to generate a succinct proof that the underlying committed vector's $i$-th element is some claimed value. For example, Merkle tree commitments support local openings with proof size logarithmic in the vector length.
Distance spot checks
The key tool that we will be relying on is a protocol for convincing the verifier that two committed vectors are at most a constant distance apart. Concretely, let $\mathsf{cm}_1$ and $\mathsf{cm}_2$ be commitments to vectors $w_1$ and $w_2$ of length $n$, respectively. We say that $w_1$ and $w_2$ are $\delta$-close if they differ in at most $\delta \cdot n$ locations (and $\delta$-far otherwise). To show that $w_1$ and $w_2$ are $\delta$-close, the prover and verifier engage in the following protocol:
1. The verifier uniformly samples an index $i \in [n]$ and sends it to the prover.
2. The prover responds with the purported $i$-th elements of $w_1$ and $w_2$, along with opening proofs.
3. The verifier accepts if these opening proofs are valid and the claimed elements are equal.
Clearly, if $w_1$ and $w_2$ are $\delta$-far apart, then the verifier will reject with probability at least $\delta$. For any constant $\delta$, this soundness error can be made negligible with parallel repetitions.
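As an illustration, the following minimal Python sketch implements the distance spot check with a toy vector commitment (per-position hashes plus a root hash) standing in for a real Merkle tree with logarithmic openings; all parameters and helper names are illustrative assumptions.

```python
import hashlib
import random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vec):
    """Toy vector commitment: hash each position, then hash all leaf hashes.
    (A real construction would use a Merkle tree with log-size local openings.)"""
    leaves = [h(f"{i}:{v}".encode()) for i, v in enumerate(vec)]
    return h(b"".join(leaves)), leaves

def check_opening(root, leaves, i, value) -> bool:
    """Check a (toy) local opening of position i to `value`."""
    return leaves[i] == h(f"{i}:{value}".encode()) and root == h(b"".join(leaves))

def spot_check_equal(n, root1, open1, root2, open2, t=40) -> bool:
    """Accept iff the two committed vectors agree at t uniformly random positions."""
    for _ in range(t):
        i = random.randrange(n)
        v1, leaves1 = open1(i)
        v2, leaves2 = open2(i)
        if not (check_opening(root1, leaves1, i, v1) and check_opening(root2, leaves2, i, v2)):
            return False  # invalid opening
        if v1 != v2:
            return False  # vectors demonstrably differ at a sampled position
    return True

# Example: vectors that are 1/8-far apart are rejected with overwhelming probability.
n = 1024
w1 = [1] * n
w2 = list(w1)
for i in random.sample(range(n), n // 8):
    w2[i] = 2
root1, leaves1 = commit(w1)
root2, leaves2 = commit(w2)
print(spot_check_equal(n,
                       root1, lambda i: (w1[i], leaves1),
                       root2, lambda i: (w2[i], leaves2)))  # almost surely False
```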
Linearity spot checks
This protocol easily generalizes to testing any kind of element-wise property, and in particular we can use it to check that $w^{\star}$ is $\delta$-close to the "virtual vector" $\alpha_1 w_1 + \alpha_2 w_2$. Unfortunately, we need to ensure that the two vectors are equal at all locations. Suppose a cheating prover commits to a vector that only differs from $\alpha_1 w_1 + \alpha_2 w_2$ at a single location. Detecting this would require $\Omega(n)$ repetitions (essentially opening the entire commitment), which violates the accumulation verifier's efficiency requirement.
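For intuition, here is the standard calculation behind this observation: if the two vectors differ in exactly one of $n$ positions, then $t$ uniformly sampled spot checks all miss the discrepancy with probability
$$\left(1 - \frac{1}{n}\right)^{t} \;\ge\; 1 - \frac{t}{n},$$
so driving the detection probability up to even a constant already requires $t = \Omega(n)$ queries.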
2.1.1 Error-resilient linearity checks from codes
It seems that we are at an impasse: our spot check can only guarantee that two vectors are $\delta$-close, but the accumulation scheme requires exact agreement. To overcome this issue, we need to make our accumulation scheme resilient to a constant $\delta$-fraction of corruptions. We do so by relying on linear codes, and in particular those which enjoy good distance properties, such as the Reed–Solomon code [47].
At a high level, we make the following changes to the accumulation scheme blueprint. Let $C$ be a linear code, and let $\delta$ be a constant which is smaller than the unique decoding radius of $C$. The accumulator witness is a codeword $c \in C$, and the corresponding instance is a commitment to $c$. The accumulation verifier checks that the output accumulator is $\delta$-close to a random linear combination of the input accumulators by running the linearity spot check. Later, the decider will check that the accumulator witness is the encoding of a good vector, and that the accumulator instance commits to it.
Knowledge soundness
We would like our linearity spot check to satisfy the following knowledge soundness property. Suppose a (possibly malicious) prover outputs commitments $\mathsf{cm}_1$, $\mathsf{cm}_2$, and $\mathsf{cm}^{\star}$ which pass the check. Furthermore, suppose that $\mathsf{cm}^{\star}$ commits to a vector $w^{\star}$. Then an efficient extractor can find vectors $w_1$ and $w_2$ such that (a) $w^{\star}$ is $\delta$-close to $\alpha_1 w_1 + \alpha_2 w_2$; and (b) $\mathsf{cm}_1$ and $\mathsf{cm}_2$ commit to $w_1$ and $w_2$. Notice that if we can extract vectors that satisfy (b), then our previous analysis of the spot check implies (a). Extraction turns out to be fairly straightforward: if we use Merkle commitments with a random oracle as the hash function, then we can find $w_1$ and $w_2$ by observing the prover's random oracle queries [52]. (To be precise, we must also consider a malicious prover that does not commit to a full vector; see the full version of the paper.)
Returning to accumulation, suppose that a (possibly malicious) prover outputs $\mathsf{acc}_1$, $\mathsf{acc}_2$, and $\mathsf{acc}^{\star}$ which pass the verifier's and decider's checks. Since the verifier runs the spot check, we can extract accumulator witnesses $w_1$ and $w_2$ such that $w^{\star}$ is $\delta$-close to $\alpha_1 w_1 + \alpha_2 w_2$. Since the decider accepts $\mathsf{acc}^{\star}$, we know that $w^{\star}$ is a codeword. Similarly, we need $w_1$ and $w_2$ to be codewords in order for the decider to accept $\mathsf{acc}_1$ and $\mathsf{acc}_2$. Unfortunately, this is simply not the case. For example, a cheating prover can always choose $w_1$ which agrees with a codeword at all but one location, and this will almost certainly go undetected.
Can we still say something meaningful about the extracted witnesses? We argue that intuitively, since $\alpha_1$ and $\alpha_2$ are (possibly correlated) random scalars, with high probability $w_1$ and $w_2$ are themselves $\delta$-close to codewords. Moreover, $w_1$ and $w_2$ decode to codewords $c_1$ and $c_2$ such that $w^{\star}$ is close to $\alpha_1 c_1 + \alpha_2 c_2$. This intuition can be formally proven using a suitable "proximity gap" result [6], which exists for a variety of parameter regimes.
The upshot is that we extract accumulators $\mathsf{acc}_1$ and $\mathsf{acc}_2$ which are only accepted by a relaxed decider. Namely, given an accumulator with witness $w$, this decider checks that the accumulator instance commits to $w$, and moreover that $w$ is $\delta$-close to the code.
Recursive extraction
The foregoing analysis suffices for a single step of accumulation. However, in order to construct PCD, we will have to recursively extract from old accumulators. It is straightforward to see that a recursively extracted accumulator is only guaranteed to be $2\delta$-close to the code, since we are extracting from an accumulator that may already be $\delta$-far from the code. More generally, $k$ steps of recursion will only guarantee accumulators that are $k\delta$-close to the code. We will see that once $k\delta$ is larger than the unique decoding radius, extraction is no longer meaningful. In particular, this leads to the following concrete attack: a cheating prover can start with a bad codeword (rejected by the decider) and, over the accumulation steps, incrementally move it to a good codeword (accepted by the decider). This motivates our notion of "bounded-depth" accumulation, which is not captured by existing definitions [21].
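The distance accounting behind the depth bound can be summarized as follows (the numeric example is an illustrative assumption). If the decider at extraction level $k$ tolerates distance $\delta_k$, then each extraction step adds at most $\delta$:
$$\delta_0 = 0, \qquad \delta_k \le \delta_{k-1} + \delta \quad\Longrightarrow\quad \delta_k \le k\,\delta,$$
so extraction remains meaningful only while $d \cdot \delta$ stays below the unique decoding radius, which for a Reed–Solomon code of rate $\rho$ is roughly $(1-\rho)/2$. For example, with $\rho = 1/2$ and $\delta = 1/20$, up to $d = 4$ levels of recursive extraction stay within the radius.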
2.2 Defining bounded-depth accumulation
To describe our construction, which only supports accumulation up to a certain (constant) depth, we introduce a new, relaxed knowledge soundness property. We say that an accumulation scheme has bounded-depth knowledge soundness (with maximum depth $d$) if there exists a family of deciders $D_0, D_1, \dots, D_d$, where $D_0$ is equivalent to the standard decider $D$, such that the following holds. If the accumulation verifier accepts and $D_k(\mathsf{acc}^{\star}) = 1$ for some $k < d$, then an efficient extractor can find $\pi.w$ and $\mathsf{acc}.w$ such that $\mathcal{V}(x, \pi) = 1$ and $D_{k+1}(\mathsf{acc}) = 1$.
This is a meaningful definition. In addition to generalizing standard knowledge soundness, which can be recovered by letting the depth be unbounded and using a single decider $D$, it captures our construction based on error-resilient linearity checks: $D_k$ is the decider that only accepts if the accumulator is at most $k\delta$-far from the code, so in particular $D_0$ only accepts codewords. The depth bound $d$ is the maximum number of recursive extractions that we can perform before $k\delta$ exceeds the unique decoding radius of the code.
2.3 Bounded-depth PCD from bounded-depth accumulation
Existing theorems that build PCD from accumulation [21, 13, 41] do not immediately translate to the bounded-depth setting. To see why, let us recall a simplified version of the construction from [21]. Suppose we have an accumulation scheme for a non-interactive argument of knowledge (NARK). Informally, a PCD proof for $z_i$, which consists of a NARK proof $\pi_i$ and accumulator $\mathsf{acc}_i$, certifies that $z_i = F^{(i)}(z_0)$, where $z_0$ is some initial value and $F$ is the step function. We maintain the invariant that if $\pi_i$ and $\mathsf{acc}_i$ are valid, then the computation is correct up to the $i$-th step.
- The PCD prover receives a proof $(\pi_i, \mathsf{acc}_i)$ for $z_i$, and wants to output a proof for $z_{i+1} = F(z_i)$. First, it accumulates $(\pi_i, \mathsf{acc}_i)$ into a new accumulator $\mathsf{acc}_{i+1}$, generating an accumulation proof $\mathsf{pf}_{i+1}$. Next, it generates a NARK proof $\pi_{i+1}$ for the following claim, expressed as a circuit $R$ (see Figure 1): "$z_{i+1} = F(z_i)$, and there exists a NARK proof $\pi_i$, old accumulator $\mathsf{acc}_i$, and accumulation proof $\mathsf{pf}_{i+1}$ which correctly accumulate into $\mathsf{acc}_{i+1}$." The proof for $z_{i+1}$ is $(\pi_{i+1}, \mathsf{acc}_{i+1})$.
- The PCD verifier checks a proof $(\pi_i, \mathsf{acc}_i)$ for $z_i$ by running the NARK verifier on $\pi_i$ and the decider on $\mathsf{acc}_i$.
Informally, the circuit $R$ performs the following checks:
1. Check that $z_{i+1} = F(z_i)$.
2. Set the accumulation verifier's inputs from the old proof instance and the old and new accumulator instances.
3. Check that the accumulation verifier accepts.
Now suppose we replace the accumulation scheme with one that only has bounded-depth knowledge soundness. The construction remains the same, but we must provide a new soundness analysis.
PCD knowledge soundness
We need to construct an extractor which, given an accepting proof for $z_T$, extracts a sequence of values $z_0, z_1, \dots, z_T$ such that $z_{i+1} = F(z_i)$ for all $i$. [21] gives the following strategy, which interleaves the NARK extractor and accumulation extractor. Suppose we have $z_T$, $\pi_T$, and $\mathsf{acc}_T$. First, we invoke the NARK extractor to obtain a witness for the circuit $R$. Second, we invoke the accumulation extractor to obtain the witness parts of the previous proof and accumulator. This gives us $\pi_{T-1}$ and $\mathsf{acc}_{T-1}$, and the process continues. We maintain the invariant that in the $i$-th step, $\pi_{T-i}$ and $\mathsf{acc}_{T-i}$ are valid.
With bounded-depth accumulation, we need to maintain a slightly weaker invariant: in the $i$-th step, instead of requiring that $\mathsf{acc}_{T-i}$ is accepted by the strict decider $D_0$, we only require that it is accepted by the $i$-th relaxed decider $D_i$. This discussion only provides a high-level overview of the proof strategy, and only describes an IVC construction; we describe the full PCD construction that supports arbitrary (bounded-depth) computation graphs, along with a full soundness analysis, in the full version of the paper.
Remark 2 (bounded-depth PCD suffices).
As presented, our PCD scheme supports up to $d$ steps of computation, where $d$ is the maximum depth of the accumulation scheme. We call this bounded-depth PCD. Since $d$ will realistically be a small constant, this seems to be of limited use: most computations require more than a constant number of steps! Fortunately, even such a limited PCD scheme can be used to construct IVC for any polynomial-length computation [10]. The idea is for the PCD prover to receive multiple proofs in each step, yielding a computation tree. In particular, if we can accumulate $m$ inputs and accumulators in a single step, then our PCD scheme can support computation trees of size $m^d$. Setting $m = \lambda$ and keeping $d$ constant allows us to support polynomial-size computations, as the small calculation below illustrates.
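A small illustrative calculation (the arity and depth values below are examples, not parameters fixed by the construction):

```python
def tree_capacity(arity: int, depth: int) -> int:
    """Number of leaves (computation steps) in a full PCD tree of the given arity and depth."""
    return arity ** depth

# With security-parameter-sized arity and a small constant depth, the tree already
# covers any realistic polynomial-length computation.
print(tree_capacity(arity=128, depth=3))  # 2,097,152 steps
print(tree_capacity(arity=128, depth=5))  # 34,359,738,368 steps
```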
Remark 3 (bounded-depth vs. constant-depth PCD).
Perhaps surprisingly, even with standard (unbounded) accumulation, [21] is only able to construct constant-depth PCD. (This is slightly different from bounded-depth PCD, where the maximum depth must be fixed in advance.) This is because the size of the PCD extractor grows exponentially in the computation's depth, regardless of the accumulation scheme's knowledge soundness property. We remark that this limitation is largely theoretical: there is no known attack which exploits unbounded recursive proof composition. In contrast, the depth bound in bounded-depth PCD is not merely an artifact of the analysis: there exists a concrete attack that can be mounted against our construction when the depth exceeds $d$. This means that the tree-based strategy described in Remark 2 is necessary for real-world implementations, unlike in prior work.
2.4 Constructing bounded-depth accumulation
Our starting point is the ProtoStar and ProtoGalaxy accumulation schemes [20, 34]. They support a general class of non-interactive arguments in which the proof consists of the NP witness along with a short vector commitment to it, and whose verifier performs three steps: (a) computing Fiat–Shamir challenges; (b) checking vector commitment openings; and (c) evaluating a polynomial over the instance, challenges, and openings. The key insight of ProtoStar and ProtoGalaxy is that the accumulation verifier can cheaply perform the first step, while batching the remaining steps and deferring them to the decider.
Let us recall ProtoGalaxy's strategy [34], focusing on the case where we are trying to accumulate claims about satisfaction of R1CS, a popular NP language in the succinct argument literature. Recall that an R1CS instance consists of constraint matrices $A, B, C$, and is said to be satisfied by a public input $x$ and witness $w$ if $Az \circ Bz = Cz$, where $z = (x, w)$ and $\circ$ denotes the entrywise product. In ProtoGalaxy, a NARK proof for an R1CS instance with public input $x$ would consist of the proof witness, which is just the R1CS witness $w$, and the proof instance, which is just a vector commitment to $w$. The polynomial evaluated by the verifier is of the form $f(z) = Az \circ Bz - Cz$. Then, to accumulate R1CS claims, ProtoGalaxy's accumulator witness is also a vector $w$, and its accumulator instance consists of a public input vector $x$, a (homomorphic) vector commitment to $w$, and an error term $e$. Thus, a NARK proof verification claim can be seen as a special case of the accumulator verification claim where the error term is zero.
To accumulate such (accumulation or NARK) verification claims $f(w_i) = e_i$ for accumulators $i = 1, \dots, \ell$, ProtoGalaxy relies on the following univariate polynomial identity:
$$f\Big(\sum_{i=1}^{\ell} L_i(X) \cdot w_i\Big) = \sum_{i=1}^{\ell} L_i(X) \cdot e_i \quad \text{for all } X \in S,$$
where $L_i$ is the $i$-th Lagrange polynomial of some $\ell$-sized set $S$, and $Z_S$ is the vanishing polynomial on $S$. For some quotient polynomial $Q$, this claim can be rewritten as
$$f\Big(\sum_{i=1}^{\ell} L_i(X) \cdot w_i\Big) = \sum_{i=1}^{\ell} L_i(X) \cdot e_i + Z_S(X) \cdot Q(X),$$
and this formulation allows the verifier to check the identity at a random point $\alpha$ if the prover sends $Q$.
This allows us to accumulate the input claims as follows. The prover constructs a new accumulator whose new instance is $x^{\star} = \sum_i L_i(\alpha) \cdot x_i$, new witness vector is $w^{\star} = \sum_i L_i(\alpha) \cdot w_i$, and new error is $e^{\star} = \sum_i L_i(\alpha) \cdot e_i + Z_S(\alpha) \cdot Q(\alpha)$. The verifier checks that the new accumulator was computed correctly by homomorphically computing the new vector commitment, and directly computing the new instance and new error term. Finally, the accumulation decider completes the checks by asserting that $f(w^{\star}) = e^{\star}$. (Technically speaking, this check, as stated, is ill-formed, as the output of $f$ is a vector while $e^{\star}$ is a single field element; this can be fixed by taking a random linear combination of the output vector.) ProtoStar and ProtoGalaxy show how to represent the latter task also as a low-degree polynomial evaluation.
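The following minimal numerical sketch illustrates the identity for two claims ($\ell = 2$, $S = \{0, 1\}$, so $L_1(X) = 1 - X$, $L_2(X) = X$, and $Z_S(X) = X(X-1)$) and a toy degree-2 check; the specific constraint, the values, and the use of rationals instead of a finite field are illustrative assumptions.

```python
from fractions import Fraction as Fr
import random

# Toy degree-2 "verifier polynomial" (one R1CS-style constraint): f(w) = w0*w1 - w2.
def f(w):
    return w[0] * w[1] - w[2]

def fold(w1, w2, x):
    # w(X) = L1(X)*w1 + L2(X)*w2 with L1 = 1 - X, L2 = X over S = {0, 1}.
    return [(1 - x) * a + x * b for a, b in zip(w1, w2)]

# Two accumulators: witnesses and their error terms e_i = f(w_i).
w1 = [Fr(2), Fr(3), Fr(5)]          # e1 = 2*3 - 5 = 1
w2 = [Fr(1), Fr(4), Fr(4)]          # e2 = 1*4 - 4 = 0
e1, e2 = f(w1), f(w2)

# G(X) = f(fold(w1, w2, X)) has degree <= 2, so here the quotient Q is just the
# constant leading coefficient of G (for higher-degree f the prover sends a polynomial).
G0, G1, G2 = (f(fold(w1, w2, Fr(v))) for v in (0, 1, 2))
Q = (G0 - 2 * G1 + G2) / 2          # second finite difference = leading coefficient

# Accumulate at a random point alpha: the folded pair (w*, e*) satisfies f(w*) = e*.
alpha = Fr(random.randint(2, 10**6))
w_star = fold(w1, w2, alpha)
e_star = (1 - alpha) * e1 + alpha * e2 + alpha * (alpha - 1) * Q
assert f(w_star) == e_star
print("folded claim holds:", f(w_star) == e_star)
```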
Our construction
We follow the same strategy, except we replace homomorphic computations with error-resilient linearity checks. Concretely, we make the following changes. An accumulator witness is a codeword $c$ encoding a vector $w$, and an accumulator instance consists of a vector commitment to $c$ and an error term $e$. The decider accepts an accumulator if $c$ is the encoding of a vector $w$ such that $f(w) = e$. Finally, as discussed in Section 2.1, the verifier uses a linearity check to ensure that the new accumulator is sufficiently consistent with the old accumulators.
Extension to arbitrary linear codes
A key parameter in our construction is the linear code. We use Reed–Solomon codes because they display a proximity gap when the combination coefficients are Lagrange evaluations at a random point. However, a modified version of our construction extends to arbitrary linear codes. This is advantageous as these codes can have asymptotically faster encoding time compared to Reed–Solomon codes [37]. The construction can work with any proximity gap (e.g., one for uniformly random coefficients). The key idea is to commit to two codewords within the accumulator, one for proximity checking and the other for the algebraic check using $f$. See Section 4.3 for details.
Efficiency
The cost of the accumulation verifier is dominated by that of the linearity checker. Recall that to achieve negligible knowledge soundness error at depth $d$, the latter checks that two codewords are close via $t$ spot checks, where the closeness parameter $\delta$ is such that $d \cdot \delta$ is less than the unique decoding radius of the code. Overall, when instantiated with the Merkle tree-based vector commitment, the accumulation verifier checks $O(t)$ Merkle tree paths. When applying this construction to the PCD scheme in Section 2.3, the latter cost becomes the recursive overhead of the PCD prover. As noted in Table 1, this cost is asymptotically better than the cost of prior SNARK-based PCD schemes [26].
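As a back-of-the-envelope calculation (with illustrative parameters): if each spot check catches a $\delta$-far discrepancy with probability at least $\delta$, then achieving soundness error $2^{-\lambda}$ requires
$$(1-\delta)^{t} \le 2^{-\lambda} \quad\Longleftrightarrow\quad t \;\ge\; \frac{\lambda}{\log_2\!\big(1/(1-\delta)\big)} \;\approx\; \frac{\lambda \ln 2}{\delta},$$
e.g., roughly $t \approx 1730$ spot checks for $\lambda = 128$ and $\delta = 1/20$.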
A keen reader may notice that a disadvantage of our construction is that recursive overhead scales with the depth of the PCD computation graph. We now present several optimizations that reduce the depth of the PCD tree and significantly improve the efficiency of the resulting PCD scheme in practice.
2.5 Optimization: batch commitments
Recall from Remarks 2 and 3 that for our PCD construction, reducing the depth of the computation graph is essential for achieving provable security guarantees, and the standard depth-reduction technique for the case of IVC [10] works by constructing a PCD tree whose leaves comprise the actual computation being performed. To achieve constant computation depth, Bitansky et al. [10] set the arity of this tree to be super-constant (e.g., $\lambda$). The per-node recursive overhead of our PCD construction in this setting is the cost of performing $\lambda$ linearity checks on codewords of length $n$, which costs on the order of $\lambda \cdot \log n$ hashes per spot check when using Merkle trees. We now describe how to reduce this to roughly $\lambda + \log n$ hashes per spot check.
Recall that the linearity checker opens all codewords at the same locations. This means that for each spot-check, each of the $\lambda$ Merkle trees is opened at the same leaf index. We take advantage of this repetitive structure by committing to all codewords using a single Merkle tree whose $i$-th leaf is the concatenation of the $i$-th symbols of the $\lambda$ codewords. Each spot-check now requires opening only a single Merkle tree path (and hashing one concatenated leaf), leading to a cost of roughly $\lambda + \log n$ hashes per spot check, as required.
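A minimal sketch of the batched leaf layout, again with a toy hash list standing in for a real Merkle tree; the names and values are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def batch_commit(codewords):
    """Commit to many equal-length codewords with one tree: the i-th leaf is the
    hash of the concatenated i-th symbols of all codewords."""
    n = len(codewords[0])
    leaves = [h("|".join(str(c[i]) for c in codewords).encode()) for i in range(n)]
    return h(b"".join(leaves)), leaves

# One spot check at position i now opens a single leaf (one authentication path in
# a real Merkle tree) and yields the i-th symbol of every codeword at once.
codewords = [[j * 10 + i for i in range(8)] for j in range(4)]
root, leaves = batch_commit(codewords)
i = 5
opened_symbols = [c[i] for c in codewords]
assert leaves[i] == h("|".join(str(s) for s in opened_symbols).encode())
```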
However, this modification comes at a cost: it requires us to commit to all codewords together at the same time. While this is straightforward at the leaf layer, it gets more complex at higher layers. For example, even committing to a new accumulator now requires waiting for the batch of "sibling" new accumulators, which in turn means that we must wait for a full batch of accumulations (each of arity $\lambda$) to complete before we can compute the commitments to the new accumulators. Overall, across the entire tree, this requires the PCD prover to maintain a state of "pending" accumulators. We provide more details in the full version of the paper.
2.6 Optimization: low-overhead IVC from accumulation
We give a generic optimization which improves on the PCD-to-IVC compiler of Bitansky et al. [52, 10]. They construct a (polynomial-length) IVC scheme for a function $F$ by using a (constant-depth) PCD scheme for a related function $F'$. In particular, $F'$ computes $F$ (at leaf nodes) and performs consistency checks (at internal nodes). This is wasteful: even though we do not need to prove anything about $F$ at internal nodes, the PCD prover still generates a proof for $F'$ (whose size is dominated by $F$).
We improve this compiler by constructing an IVC scheme with minimal overhead. The core idea is that we first construct a tree of accumulators for $F$, i.e., a tree whose leaf nodes are proofs for $F$, and each parent accumulates its children. Then, we construct a PCD tree which proves that the accumulation tree was constructed correctly. When the PCD scheme is instantiated with our accumulation-based construction, the PCD circuit now checks two accumulation verifiers: one that checks the correctness of the accumulation tree, and one that helps check the correctness of the PCD tree.
Accumulating $F$ separately means that we no longer have to generate NARK proofs for $F$ at internal nodes. Additionally, because we only need to show that internal nodes of the accumulation tree were constructed correctly, our PCD tree has one fewer layer than before. This further reduces cost, in particular for higher arities (as the arity grows, the leaf layer of a tree contains a higher fraction of the nodes). We provide more details in the full version of this paper.
2.7 Optimization: PCD composition
The recursive overhead of our PCD scheme grows linearly with the maximum supported depth. This is in contrast with SNARK-based PCD schemes like Fractal [26], which do not suffer from such an efficiency loss. However, these schemes pay a higher per-step cost, and are thus asymptotically less efficient than our PCD scheme for low recursion depths.
We provide a generic optimization to combine SNARK-based PCD schemes with our PCD scheme into a scheme that achieves better efficiency than either scheme alone. The core idea is to first use our accumulation-based PCD up to some depth $d$, and then prove the PCD verifier for the latter with a SNARK-based PCD scheme on top. When invoked with tree PCD, this means that the SNARK-based PCD scheme is invoked only once every $\lambda^{d}$ steps. By choosing $d$ appropriately, the resulting scheme achieves better efficiency than either scheme alone. Furthermore, the resulting scheme supports arbitrary constant depth, as opposed to our accumulation-based scheme, which only supports an a priori fixed depth. We provide more details in the full version of this paper.
3 Bounded-depth accumulation
Let $\mathsf{ARG}$ be a non-interactive argument where knowledge soundness (but not necessarily completeness) additionally holds for a "relaxed" verifier. Suppose proofs can be split into two parts, i.e., $\pi = (\pi.x, \pi.w)$. We call the pair of an instance of the relation and the proof instance $\pi.x$ the verifier input instance, and the proof witness $\pi.w$ the verifier input witness; a verifier input is the tuple of both parts, and we write the verifier as running on a verifier input. An accumulation scheme in the random oracle model for $\mathsf{ARG}$ is a tuple of algorithms $(\mathsf{G}, \mathsf{I}, \mathsf{P}, \mathsf{V}, \mathsf{D})$ with the following syntax and properties.
- Generator $\mathsf{G}$: On input a security parameter $\lambda$ (in unary), the generator samples and outputs accumulation parameters.
- Indexer $\mathsf{I}$: On input accumulation parameters and argument parameters, the indexer deterministically computes and outputs a proving key $\mathsf{pk}$, verification key $\mathsf{vk}$, and decision key $\mathsf{dk}$.
- Prover $\mathsf{P}$: On input the proving key $\mathsf{pk}$, verifier inputs, and accumulators, the accumulation prover outputs a new accumulator and proof. Here, the numbers of verifier inputs and accumulators are fixed arities which may be functions of the index.
- Verifier $\mathsf{V}$: On input the verification key $\mathsf{vk}$, verifier input instances, accumulator instances, a new accumulator instance, and a proof, the accumulation verifier outputs a bit indicating whether the new accumulator instance correctly accumulates the other instances.
- Decider $\mathsf{D}$: On input the decision key $\mathsf{dk}$ and an accumulator, the decider outputs a bit indicating whether the accumulator is valid.
Completeness
For every (unbounded) adversary that outputs valid verifier inputs and valid accumulators, honest accumulation yields a new accumulator and proof that the accumulation verifier and decider accept (with probability 1 over the random oracle). To bootstrap accumulation, we also assume the prover can efficiently construct a dummy accumulator and proof, which the decider accepts.
Knowledge soundness
We say that the scheme has bounded-depth knowledge soundness (with maximum depth $d$) if there exists a family of deciders $\mathsf{D}_0, \dots, \mathsf{D}_d$, where $\mathsf{D}_0$ is equivalent to $\mathsf{D}$, such that the following holds: there exists an expected polynomial time extractor such that, for every depth parameter $k < d$, expected polynomial time adversary, and auxiliary input distribution, whenever the adversary outputs an accumulation step accepted by the accumulation verifier and a new accumulator accepted by $\mathsf{D}_k$, the extractor outputs verifier input witnesses and old accumulator witnesses accepted by the relaxed argument verifier and by $\mathsf{D}_{k+1}$, with all but negligible probability.
The extractor implicitly receives as input.
Remark 4.
We can also define a bounded-depth version of completeness with a separate family of deciders. In the completeness definition, valid accumulators for the $k$-th decider accumulate to a valid accumulator for the $(k+1)$-th decider. This is important in settings where even an honest prover can only perform a bounded number of accumulations.
4 Constructing bounded-depth accumulation
We construct a bounded-depth accumulation scheme, which supports (possibly distinct) arities, for a general class of non-interactive arguments. Across all of our constructions, we fix the following global constants: a depth bound $d$, code rate $\rho$, and distance $\delta$; this guarantees that $d \cdot \delta$ is smaller than the unique decoding radius of a Reed–Solomon code with rate $\rho$. Additionally, we use domain separation on the random oracle to model three disjoint oracles: one for Merkle trees and hashing, one for the argument verifier's randomness, and one for the accumulation verifier's randomness. When querying the latter two, we assume the random oracle's output is used to sample from the verifier's challenge set. All security proofs are deferred to the full version of the paper.
Notation
If are vectors, we will often interpret tuples like as a vector consisting of concatenated together. Given a codeword , let denote the corresponding message which encodes to . We write as shorthand for the tuple and as shorthand for the Cartesian product . Similarly, let and . When querying locations in a codeword, let and . We use arrow notation as shorthand for tuples of commitment data, e.g., . Vector commitment functions map over tuples, e.g., should be interpreted as saying “for each , let .”
4.1 Non-interactive argument
Our starting point is any special-sound interactive proof with an algebraic verifier. By this, we mean that the verifier's check can be expressed as a sequence of low-degree polynomials (derived from the index) which take the transcript as input. The verifier accepts if all of the polynomials evaluate to zero.
In more detail, we require an interactive proof for some indexed relation. Let $k$ be the number of rounds in the protocol; this may be a function of the index. For simplicity, we assume that the instance $x$, the prover's messages $m_1, \dots, m_k$, and the verifier's challenges $r_1, \dots, r_k$ are all vectors over a field $\mathbb{F}$; their lengths may be a function of the index. From the index, the verifier derives degree-$D$ polynomials $f_1, \dots, f_{\ell}$ over $\mathbb{F}$. It accepts if, for all $j \in [\ell]$, $f_j(x, m_1, r_1, \dots, m_k, r_k) = 0$.
Compressing the verifier
Without loss of generality, we can assume that the algebraic verifier consists of a single polynomial. This is because multiple polynomial checks can be compressed into a single check, e.g., by using the following technique due to [34, 20].
Let $\Pi$ be an interactive proof where the verifier's check consists of polynomials $f_1, \dots, f_{\ell}$, each of degree at most $D$. We transform this into an interactive proof $\Pi'$ for the same relation, where the verifier's check is a single polynomial
$$f(\mathsf{tr}, \alpha) \;=\; \sum_{j=1}^{\ell} L_j(\alpha) \cdot f_j(\mathsf{tr}).$$
Here, $L_j$ is the unique degree-$(\ell - 1)$ polynomial satisfying $L_j(s_j) = 1$ and $L_j(s_i) = 0$ for all $i \neq j$, over some fixed $\ell$-sized set $\{s_1, \dots, s_{\ell}\}$. It follows that $f$ is of degree at most $D + \ell - 1$. The interactive protocol is the same as before, except that the verifier additionally samples a challenge $\alpha$. The verifier accepts if $f(\mathsf{tr}, \alpha) = 0$, where $\mathsf{tr}$ is the transcript from the original proof.
If $\Pi$ is special-sound, then $\Pi'$ is special-sound with one additional round requiring $\ell$ distinct challenges. To see why, suppose we have a tree of accepting transcripts for $\Pi'$. Consider an arbitrary node in the penultimate layer of the tree. Its children correspond with transcripts of the form $(\mathsf{tr}, \alpha_1), \dots, (\mathsf{tr}, \alpha_{\ell})$, each $\alpha_i$ distinct. Since the transcripts are accepting, the degree-$(\ell - 1)$ univariate polynomial $\sum_{j} L_j(X) \cdot f_j(\mathsf{tr})$ is zero at $\ell$ points. It follows that it is the zero polynomial and, for all $j$, $f_j(\mathsf{tr}) = 0$. Removing the bottom layer of the tree therefore yields a tree of accepting transcripts for $\Pi$. Hence, we construct an extractor for $\Pi'$ by running the extractor for $\Pi$ on this smaller tree.
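A minimal Python sketch of this compression step, over the rationals rather than a large finite field and with arbitrary example values; only the evaluated check values matter here, and all names are illustrative.

```python
from fractions import Fraction as Fr
import random

def lagrange_at(S, j, x):
    """Evaluate the j-th Lagrange basis polynomial for the set S at x."""
    num = den = Fr(1)
    for i, s in enumerate(S):
        if i != j:
            num *= (x - s)
            den *= (S[j] - s)
    return num / den

def combined_check(values, alpha, S):
    """Compress the checks f_1(tr) = ... = f_l(tr) = 0 into the single value
    sum_j L_j(alpha) * f_j(tr)."""
    return sum(lagrange_at(S, j, alpha) * v for j, v in enumerate(values))

S = [Fr(0), Fr(1), Fr(2)]
honest = [Fr(0), Fr(0), Fr(0)]    # every individual check passes
cheating = [Fr(0), Fr(7), Fr(0)]  # one individual check fails

alpha = Fr(random.randint(3, 10**9))
print(combined_check(honest, alpha, S))    # always 0
print(combined_check(cheating, alpha, S))  # nonzero except for at most 2 unlucky alphas
```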
Figure 2 (outline): the non-interactive argument obtained from the Fiat–Shamir transformation.
- Generator: choose a suitable field and output the public parameters.
- Indexer: hash the index via the random oracle; from the index and parameters, derive the number of rounds, the length of the instance, the length of each prover message, and the format of each verifier challenge; derive the verifier's check; for each round, fix a Reed–Solomon code over the field with the appropriate dimension, rate, and evaluation domain; output the proving and verification keys.
- Prover: compute the proving key; in each round, compute the prover's message, encode it under the round's code, commit to the codeword, and query the random oracle to obtain the next challenge; output the proof.
- Verifier: compute the verification key; recompute the challenges by querying the random oracle; verify the commitment openings; accept if the algebraic check holds.
- Relaxed verifier: proceed as the verifier, but uniquely decode the (possibly noisy, partially opened) codewords, rejecting if no codeword is found; accept if the algebraic check holds on the decoded messages.
Committing to messages
In order to achieve efficient accumulation, we will instead have the prover send commitments to messages. Only in the final move of the protocol does the prover send openings to all of the commitments. Strictly speaking, this is only special-sound for an “augmented relation”; namely, there exists an extractor which, given a tree of accepting transcripts, outputs either a witness or a break of the commitment scheme. Nonetheless, assuming the scheme is computationally binding, applying the Fiat-Shamir transformation yields a non-interactive argument of knowledge for the original relation. We refer to [20] for a more detailed analysis.
Removing interaction
Given a special-sound interactive proof , we apply the Fiat–Shamir transformation (with commitments) to get a non-interactive argument of knowledge [4]. In order to achieve efficient accumulation, we use a standard variant of the transformation where the index is first hashed to a succinct value . The Fiat–Shamir transform outputs a non-interactive argument , shown in Figure 2, for the indexed relation family . We also define an indexer as a helper algorithm.
Attema et al. [4] prove that multi-round public-coin protocols can be compiled into non-interactive arguments. However, their definition of knowledge-soundness is slightly different from ours: they require that the extractor succeeds with non-negligible probability if the adversary succeeds with non-negligible probability. Our definition, on the other hand, requires that the extractor succeeds with all but negligible probability whenever the adversary succeeds. However, these definitions are equivalent as one can boost the extractor’s success probability by running it until it succeeds. This works as the extractor is only required to run in expected polynomial time.
Relaxed verifier
We use a specific commitment scheme to support accumulation: the prover sends a vector commitment to a codeword of the message, along with the full auxiliary data. We relax the verifier in two different ways to get (also shown in Figure 2). First, we allow it to decode noisy codewords. Second, we allow it to accept partial openings to the vector commitment where the missing positions are interpreted as erasure errors. Because the verifier only accepts when the partial opening is decodable, the prover is still bound to a unique (decoded) vector; hence, knowledge soundness is not affected.
4.2 Accumulation scheme
Fix a subset $S \subseteq \mathbb{F}$ of appropriate size (this may be a function of the index). We construct an accumulation scheme for the foregoing non-interactive argument. The argument verifier inputs are split into instance and witness parts. Accumulators have a nearly identical structure; in fact, any verifier input can be converted into an accumulator by setting the error term to zero and computing the remaining fields via the helper defined below.
Casting a verifier input to an accumulator: query the random oracle, collect the resulting values and the verifier input into a vector, and output the corresponding accumulator (with error term zero). We also define a helper function which casts the verifier input instances and accumulator instances into a single list of accumulator instances. Finally, we define helper functions which perform the bulk of proving and verification:
- Proving helper: compute, for each input accumulator, the quantities entering the algebraic claim; compute the quotient polynomial of bounded degree; query the accumulation oracle to obtain a uniformly sampled field element; fold the instances, witness codewords, and error terms at that point; query the accumulation oracle again to obtain a uniformly sampled set of spot-check locations; and output the new accumulator together with the openings of all codewords at those locations.
- Verification helper: recompute both oracle queries; check the claimed openings of the input and output codewords at the spot-check locations, rejecting if any opening is invalid; and verify that, at those locations, the folded codeword and the folded error term are consistent with the inputs.
See Figure 3 for a full description of the accumulation scheme, along with the family of deciders. In outline:
- Generator: choose a suitable spot-check parameter and output the accumulation parameters.
- Indexer: obtain the argument's keys from its indexer and output the proving, verification, and decision keys.
- Prover: invoke the proving helper on the cast inputs and output the new accumulator and accumulation proof.
- Verifier: invoke the verification helper on the cast instances and accept if it accepts.
- Decider: decode the accumulator's codeword (rejecting if no codeword is found within the decoding radius) and accept if the decoded vector satisfies the algebraic check against the error term.
4.3 Using arbitrary linear codes
At first glance, our accumulation scheme does not use any special properties of the Reed–Solomon code. Because other codes might have desirable properties, e.g., linear-time encoding, this motivates the following question: can we instantiate it with an arbitrary linear code, so long as it has good distance? Recall that the accumulator’s vectors are a random linear combination of the input vectors. In particular, the coefficients are Lagrange evaluations of a random field element; this is necessary for compressing the polynomial evaluation claims. Unfortunately, existing proximity gaps for arbitrary linear codes [49, 1, 32] do not support this specific distribution of coefficients.
Separating the proximity claim
In some sense, our construction accumulates two distinct claims for the same vector: a polynomial evaluation claim, and a proximity claim. To overcome the foregoing issue, we modify our accumulation scheme to separate these claims. Specifically, each accumulator now holds two codewords: the first is a linear combination using Lagrange coefficients, and the second is a linear combination using proximity gap coefficients. Accordingly, the accumulation verifier uses the first codeword to compress evaluation claims, and the second codeword to maintain proximity. This will be roughly twice as expensive as before.
Definition 5 (Proximity gaps for linear codes).
Let be a linear code with relative distance and blocklength . We will treat elements of as functions . Let be a function that takes in randomness and outputs coefficients . We say that has a proximity gap with respect to distribution , error , and proximity bound if the following holds.
For any and arbitrary functions , if
then for all in the support of the distribution , we have
Furthermore, there exists such that for all in the support,
and in fact
Lemma 6.
Let be a linear code with relative distance , blocklength , and proximity gap with respect to distribution , error , and proximity bound . For and vectors , suppose the following holds:
Then there exists a subdomain and codewords such that the following holds. First, . Second, for all , and agree on .
Construction
We use linear codes with efficient unique decoding up to the error bound of the proximity gap. We fix the distance parameter so that the accumulated distance over $d$ levels remains smaller than the unique decoding radius. The modified construction is outlined below; we omit the description of algorithms that are unchanged from Figure 3.
- Casting a verifier input to an accumulator now additionally includes commitments to zero vectors for the second (proximity) codeword.
- Proving helper: fold the witnesses twice, once with Lagrange coefficients (for the algebraic claim, as before) and once with coefficients sampled from the proximity-gap distribution (for the proximity claim); derive both sets of coefficients and the spot-check locations from the accumulation oracle; compute the corresponding folded error term; and output the new accumulator together with openings of both codewords at the spot-check locations.
- Verification helper: recompute the oracle queries; check the openings of both codewords of every input and of the output at the spot-check locations, rejecting on any invalid opening; and verify consistency of both folded codewords and of the folded error term at those locations.
- Decider: decode both of the accumulator's codewords (rejecting if either fails to decode) and accept if the decoded vector satisfies the algebraic check against the error term.
References
- [1] Scott Ames, Carmit Hazay, Yuval Ishai, and Muthuramakrishnan Venkitasubramaniam. Ligero: Lightweight sublinear arguments without a trusted setup. In Proceedings of the 24th ACM Conference on Computer and Communications Security, CCS ’17, pages 2087–2104, 2017. URL: https://eprint.iacr.org/2022/1608, doi:10.1145/3133956.3134104.
- [2] Gal Arnon, Alessandro Chiesa, Giacomo Fenzi, and Eylon Yogev. STIR: reed-solomon proximity testing with fewer queries. In Proceedings of the 44th Annual International Cryptology Conference, CRYPTO ’24, pages 380–413, 2024. URL: https://eprint.iacr.org/2024/390, doi:10.1007/978-3-031-68403-6_12.
- [3] Arasu Arun, Srinath T. V. Setty, and Justin Thaler. Jolt: Snarks for virtual machines via lookups. In Proceedings of the 43rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT ’24, pages 3–33, 2024. URL: https://eprint.iacr.org/2023/1217, doi:10.1007/978-3-031-58751-1_1.
- [4] Thomas Attema, Serge Fehr, and Michael Klooß. Fiat-shamir transformation of multi-round interactive proofs (extended version). J. Cryptol., 36(4):36, 2023. URL: https://eprint.iacr.org/2021/1377, doi:10.1007/S00145-023-09478-Y.
- [5] Eli Ben-Sasson, Iddo Bentov, Ynon Horesh, and Michael Riabzev. Fast Reed–Solomon interactive oracle proofs of proximity. In Proceedings of the 45th International Colloquium on Automata, Languages and Programming, ICALP ’18, pages 14:1–14:17, 2018. doi:10.4230/LIPICS.ICALP.2018.14.
- [6] Eli Ben-Sasson, Dan Carmon, Yuval Ishai, Swastik Kopparty, and Shubhangi Saraf. Proximity gaps for reed-solomon codes. JACM, 70(5):31:1–31:57, 2023. URL: https://eprint.iacr.org/2020/654, doi:10.1145/3614423.
- [7] Eli Ben-Sasson, Alessandro Chiesa, and Nicholas Spooner. Interactive oracle proofs. In Proceedings of the 14th Theory of Cryptography Conference, TCC ’16-B, pages 31–60, 2016. URL: https://eprint.iacr.org/2016/116, doi:10.1007/978-3-662-53644-5_2.
- [8] Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, and Madars Virza. Scalable zero knowledge via cycles of elliptic curves. In Proceedings of the 34th Annual International Cryptology Conference, CRYPTO ’14, pages 276–294, 2014. URL: https://eprint.iacr.org/2014/595, doi:10.1007/978-3-662-44381-1_16.
- [9] Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, and Madars Virza. Scalable zero knowledge via cycles of elliptic curves. Algorithmica, 79(4):1102–1160, 2017. URL: https://eprint.iacr.org/2014/595, doi:10.1007/S00453-016-0221-0.
- [10] Nir Bitansky, Ran Canetti, Alessandro Chiesa, and Eran Tromer. Recursive composition and bootstrapping for SNARKs and proof-carrying data. In Proceedings of the 45th ACM Symposium on the Theory of Computing, STOC ’13, pages 111–120, 2013. URL: http://eprint.iacr.org/2012/095, doi:10.1145/2488608.2488623.
- [11] Dan Boneh and Binyi Chen. Latticefold: A lattice-based folding scheme and its applications to succinct proof systems. ePrint Report 2024/257, 2024. URL: https://eprint.iacr.org/2024/257.
- [12] Dan Boneh, Özgür Dagdelen, Marc Fischlin, Anja Lehmann, Christian Schaffner, and Mark Zhandry. Random oracles in a quantum world. In Proceedings of the 17th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT ’11, pages 41–69, 2011. URL: https://eprint.iacr.org/2010/428, doi:10.1007/978-3-642-25385-0_3.
- [13] Dan Boneh, Justin Drake, Ben Fisch, and Ariel Gabizon. Halo infinite: Proof-carrying data from additive polynomial commitments. In Proceedings of the 41st Annual International Cryptology Conference, CRYPTO ’21, 2021. URL: https://eprint.iacr.org/2020/1536.
- [14] Joseph Bonneau, Izaak Meckler, Vanishree Rao, and Evan Shapiro. Coda: Decentralized cryptocurrency at scale. ePrint Report 2020/352, 2020. URL: https://eprint.iacr.org/2020/352.
- [15] Jonathan Bootle, Andrea Cerulli, Jens Groth, Sune K. Jakobsen, and Mary Maller. Arya: Nearly linear-time zero-knowledge proofs for correct program execution. In Proceedings of the 24th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT ’18, pages 595–626, 2018. URL: https://eprint.iacr.org/2018/380, doi:10.1007/978-3-030-03326-2_20.
- [16] Jonathan Bootle, Alessandro Chiesa, and Jens Groth. Linear-time arguments with sublinear verification from tensor codes. In Proceedings of the 18th International Conference on the Theory of Cryptography, TCC ’20, pages 19–46, 2020. URL: https://eprint.iacr.org/2020/1426, doi:10.1007/978-3-030-64378-2_2.
- [17] Jonathan Bootle, Alessandro Chiesa, and Siqi Liu. Zero-knowledge iops with linear-time prover and polylogarithmic-time verifier. In Proceedings of the 41st Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT ’22, pages 275–304, 2022. URL: https://eprint.iacr.org/2020/1527, doi:10.1007/978-3-031-07085-3_10.
- [18] Sean Bowe, Jack Grigg, and Daira Hopwood. Halo: Recursive proof composition without a trusted setup. ePrint Report 2019/1021, 2019. URL: https://eprint.iacr.org/2019/1021.
- [19] Elette Boyle, Ran Cohen, and Aarushi Goel. Breaking the $O(\sqrt{n})$-bit barrier: Byzantine agreement with polylog bits per party. J. Cryptol., 37(1):2, 2024. arXiv:2002.02516.
- [20] Benedikt Bünz and Binyi Chen. Protostar: Generic efficient accumulation/folding for special-sound protocols. In Proceedings of the 29th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT ’23, pages 77–110, 2023. URL: https://eprint.iacr.org/2023/620, doi:10.1007/978-981-99-8724-5_3.
- [21] Benedikt Bünz, Alessandro Chiesa, William Lin, Pratyush Mishra, and Nicholas Spooner. Proof-carrying data without succinct arguments. In Proceedings of the 41st Annual International Cryptology Conference, CRYPTO ’21, pages 681–710, 2021. URL: https://eprint.iacr.org/2020/1618, doi:10.1007/978-3-030-84242-0_24.
- [22] Benedikt Bünz, Alessandro Chiesa, Pratyush Mishra, and Nicholas Spooner. Proof-carrying data from accumulation schemes. In Proceedings of the 18th Theory of Cryptography Conference, TCC ’20, 2020. URL: https://eprint.iacr.org/2020/499.
- [23] Binyi Chen, Benedikt Bünz, Dan Boneh, and Zhenfei Zhang. Hyperplonk: Plonk with linear-time prover and high-degree custom gates. In Proceedings of the 42nd Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT ’23, pages 499–530, 2023. URL: https://eprint.iacr.org/2022/1355, doi:10.1007/978-3-031-30617-4_17.
- [24] Weikeng Chen, Alessandro Chiesa, Emma Dauterman, and Nicholas P. Ward. Reducing participation costs via incremental verification for ledger systems. ePrint Report 2020/1522, 2020. URL: https://eprint.iacr.org/2020/1522.
- [25] Alessandro Chiesa, Peter Manohar, and Nicholas Spooner. Succinct arguments in the quantum random oracle model. In Proceedings of the 17th International Conference on the Theory of Cryptography, TCC ’19, pages 1–29, 2019. URL: https://eprint.iacr.org/2019/834, doi:10.1007/978-3-030-36033-7_1.
- [26] Alessandro Chiesa, Dev Ojha, and Nicholas Spooner. Fractal: Post-quantum and transparent recursive proofs from holography. In Proceedings of the 39th Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT ’20, 2020. URL: https://eprint.iacr.org/2019/1076.
- [27] Alessandro Chiesa and Eran Tromer. Proof-carrying data and hearsay arguments from signature cards. In Proceedings of the 1st Symposium on Innovations in Computer Science, ICS ’10, pages 310–331, 2010. URL: https://people.eecs.berkeley.edu/~alexch/docs/CT10.pdf.
- [28] Alessandro Chiesa, Eran Tromer, and Madars Virza. Cluster computing in zero knowledge. In Proceedings of the 34th Annual International Conference on Theory and Application of Cryptographic Techniques, EUROCRYPT ’15, pages 371–403, 2015. URL: https://eprint.iacr.org/2015/377, doi:10.1007/978-3-662-46803-6_13.
- [29] Stephen Chong, Eran Tromer, and Jeffrey A. Vaughan. Enforcing language semantics using proof-carrying data. ePrint Report 2013/513, 2013. URL: http://eprint.iacr.org/2013/513.
- [30] Michel Dellepere, Pratyush Mishra, and Alireza Shirzad. Garuda and pari: Faster and smaller snarks via equifficient polynomial commitments. ePrint Report 2024/1245, 2024. URL: https://eprint.iacr.org/2024/1245.
- [31] Benjamin E. Diamond and Jim Posen. Succinct arguments over towers of binary fields. ePrint Report 2023/1784, 2023. URL: https://eprint.iacr.org/2023/1784.
- [32] Benjamin E. Diamond and Jim Posen. Proximity testing with logarithmic randomness. IACR Commun. Cryptol., 1(1):2, 2024. URL: https://eprint.iacr.org/2023/630, doi:10.62056/AKSDKP10.
- [33] Erez Druk and Yuval Ishai. Linear-time encodable codes meeting the gilbert-varshamov bound and their cryptographic applications. In Proceedings of the 5th Innovations in Theoretical Computer Science Conference, ITCS ’14, pages 169–182, 2014. URL: https://dl.acm.org/doi/pdf/10.1145/2554797.2554815, doi:10.1145/2554797.2554815.
- [34] Liam Eagen and Ariel Gabizon. Protogalaxy: Efficient protostar-style folding of multiple instances. ePrint Report 2023/1106, 2023. URL: https://eprint.iacr.org/2023/1106.
- [35] Ariel Gabizon and Zachary J. Williamson. The turbo-plonk program syntax for specifying snark programs, 2019. URL: https://docs.zkproof.org/pages/standards/accepted-workshop3/proposal-turbo_plonk.pdf.
- [36] Ariel Gabizon and Zachary J. Williamson. plookup: A simplified polynomial protocol for lookup tables. ePrint Report 2020/315, 2020. URL: https://eprint.iacr.org/2020/315.
- [37] Alexander Golovnev, Jonathan Lee, Srinath T. V. Setty, Justin Thaler, and Riad S. Wahby. Brakedown: Linear-time and field-agnostic snarks for R1CS. In Proceedings of the 43rd Annual International Cryptology Conference, CRYPTO ’23, pages 193–226, 2023. URL: https://eprint.iacr.org/2021/1043, doi:10.1007/978-3-031-38545-2_7.
- [38] Assimakis Kattis and Joseph Bonneau. Proof of necessary work: Succinct state verification with fairness guarantees. ePrint Report 2020/190, 2020. URL: https://eprint.iacr.org/2020/190.
- [39] Abhiram Kothapalli and Srinath Setty. Cyclefold: Folding-scheme-based recursive arguments over a cycle of elliptic curves. ePrint Report 2023/1192, August 2023. URL: https://eprint.iacr.org/2023/1192.
- [40] Abhiram Kothapalli and Srinath T. V. Setty. Hypernova: Recursive arguments for customizable constraint systems. In Proceedings of the 44th Annual International Cryptology Conference, CRYPTO ’24, pages 345–379, 2024. URL: https://eprint.iacr.org/2023/573, doi:10.1007/978-3-031-68403-6_11.
- [41] Abhiram Kothapalli, Srinath T. V. Setty, and Ioanna Tzialla. Nova: Recursive zero-knowledge arguments from folding schemes. In Proceedings of the 42nd Annual International Cryptology Conference, CRYPTO ’22, pages 359–388, 2022. URL: https://eprint.iacr.org/2021/370, doi:10.1007/978-3-031-15985-5_13.
- [42] Assa Naveh and Eran Tromer. PhotoProof: Cryptographic image authentication for any set of permissible transformations. In Proceedings of the 37th IEEE Symposium on Security and Privacy, S&P ’16, pages 255–271, 2016. URL: https://ieeexplore.ieee.org/document/7546506, doi:10.1109/SP.2016.23.
- [43] Wilson D. Nguyen, Dan Boneh, and Srinath T. V. Setty. Revisiting the nova proof system on a cycle of curves. In Proceedings of the 5th Conference on Advances in Financial Technologies, AFT ’23, pages 18:1–18:22, 2023. URL: https://eprint.iacr.org/2023/969, doi:10.4230/LIPICS.AFT.2023.18.
- [44] O(1) Labs. Mina Cryptocurrency, 2020. minaprotocol.org.
- [45] Torben P. Pedersen. Non-interactive and information-theoretic secure verifiable secret sharing. In Proceedings of the 11th Annual International Cryptology Conference, CRYPTO ’91, pages 129–140, 1992. URL: https://link.springer.com/content/pdf/10.1007/3-540-46766-1_9.pdf.
- [46] Polygon Zero Team. Plonky2: Fast recursive arguments with plonk and fri. URL: https://github.com/0xPolygonZero/plonky2/blob/main/plonky2/plonky2.pdf.
- [47] Irving S. Reed and Gustave Solomon. Polynomial codes over certain finite fields. Journal of the Society for Industrial and Applied Mathematics, 8:300–304, 1960.
- [48] Omer Reingold, Guy N. Rothblum, and Ron D. Rothblum. Constant-round interactive proofs for delegating computation. SIAM J. Comp., 50(3), 2021. doi:10.1137/16M1096773.
- [49] Guy N. Rothblum, Salil P. Vadhan, and Avi Wigderson. Interactive proofs of proximity: delegating computation in sublinear time. In Proceedings of the 45th ACM Symposium on the Theory of Computing, STOC ’13, pages 793–802, 2013. URL: https://privacytools.seas.harvard.edu/files/privacytools/files/stoc283fp-rothblum.pdf. doi:10.1145/2488608.2488709.
- [50] Daniel A. Spielman. Linear-time encodable and decodable error-correcting codes. IEEE Trans. on Inf. Theory, 42(6):1723–1731, 1996. URL: http://cs.yale.edu/homes/spielman/PAPERS/linearTimeIT.pdf, doi:10.1109/18.556668.
- [51] StarkWare. ethstark documentation. ePrint Report 2021/582, 2021. URL: https://eprint.iacr.org/2021/582.
- [52] Paul Valiant. Incrementally verifiable computation or proofs of knowledge imply time/space efficiency. In Proceedings of the 5th Theory of Cryptography Conference, TCC ’08, pages 1–18, 2008. URL: https://iacr.org/archive/tcc2008/49480001/49480001.pdf, doi:10.1007/978-3-540-78524-8_1.