
Accumulation Without Homomorphism

Benedikt Bünz (New York University, NY, USA), Pratyush Mishra (University of Pennsylvania, Philadelphia, PA, USA), Wilson Nguyen (Stanford University, CA, USA), William Wang (New York University, NY, USA)
Abstract

Accumulation schemes are a simple yet powerful primitive that enable highly efficient constructions of incrementally verifiable computation (IVC). Unfortunately, all prior accumulation schemes rely on homomorphic vector commitments whose security is based on public-key assumptions. It is an interesting open question to construct efficient accumulation schemes that avoid the need for such assumptions.

In this paper, we answer this question affirmatively by constructing an accumulation scheme from non-homomorphic vector commitments which can be realized from solely symmetric-key assumptions (e.g., Merkle trees). We overcome the need for homomorphisms by instead performing spot-checks over error-correcting encodings of the committed vectors.

Unlike prior accumulation schemes, our scheme only supports a bounded number of accumulation steps. We show that such bounded-depth accumulation still suffices to construct proof-carrying data (a generalization of IVC). We also demonstrate several optimizations to our PCD construction which greatly improve concrete efficiency.

Keywords and phrases:
Proof-carrying data, incrementally verifiable computation, accumulation schemes
Funding:
Benedikt Bünz: Supported by Alpen Labs, Chaincode Labs, and the Sui Foundation.
Wilson Nguyen: Supported by NSF, DARPA, the Simons Foundation, UBRI, NTT Research, and the Stanford Future of Digital Currency Initiative (FDCI). Opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
Copyright and License:
© Benedikt Bünz, Pratyush Mishra, Wilson Nguyen, and William Wang; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation Cryptographic protocols
Related Version:
Full Version: https://eprint.iacr.org/2024/474
Editor:
Raghu Meka

1 Introduction

Proof-carrying data (PCD) [27] is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely, while ensuring that the correctness of every intermediate step can be verified efficiently. PCD is a generalization of the prior notion of incrementally verifiable computation (IVC) [52]. (IVC is the special case of PCD where the distributed computation graph is a line.)

PCD has found numerous applications in both theory and practice, including enforcing language semantics [29], complexity-preserving succinct arguments [10, 9], verifiable MapReduce computations [28], image provenance [42], and consensus protocols and blockchains [44, 38, 14, 24, 19]. It is thus a key question to understand security assumptions that PCD constructions require, as well as the efficiency that they can attain under these assumptions.

Let us review how existing PCD constructions fare along both dimensions. For simplicity, we focus on the special case of IVC in the following discussion.

PCD from succinctly verifiable arguments

The standard construction of PCD is via recursive composition of succinct non-interactive arguments of knowledge (SNARKs) [10, 8, 9, 26]. Informally, to prove a t-step computation, the PCD prover proves that the t-th step is correct, and that there exists a valid proof for the first t−1 steps. The state-of-the-art works in this line follow the template of Fractal [26] by relying on SNARKs in the random oracle model. (The concrete PCD construction makes non-black-box use of the SNARK verifier, which requires us to heuristically instantiate the random oracle.) This means that we can achieve PCD (heuristically) from only symmetric-key cryptography, but at the cost of relying on the existence of SNARKs, which are complex to construct and incur (asymptotically and concretely) high costs for the PCD prover.

PCD from accumulation

A recent popular approach to avoid this reliance on SNARKs is to construct PCD via accumulation schemes [22, 21, 41]. Roughly speaking, instead of recursively checking proofs as above, the PCD prover “accumulates” the proof for each step into a running accumulator, and then the PCD verifier performs a single expensive check on the final accumulator. This line of work has led to simple and efficient PCD schemes that incur low costs for the PCD prover, and so has seen much interest in new constructions and deployments [21, 41, 20, 34, 39, 40]. Unfortunately, all known accumulation schemes rely on homomorphic vector commitments that are only known to exist under public-key assumptions. Additionally, because most existing homomorphic vector commitments are not known to achieve post-quantum security, the resulting accumulation schemes are also quantum-insecure.

Our question

We are thus left in an unsatisfactory state of affairs: on the one hand, we have PCD from ROM-based SNARKs that (heuristically) relies only on minimal symmetric-key assumptions but incurs high PCD prover costs due to this reliance on SNARKs, while on the other hand, we have PCD from accumulation that relies on public-key assumptions but achieves comparatively lower PCD prover costs by avoiding SNARKs. This motivates the question we tackle in this paper: can we design PCD that achieves low prover costs by avoiding SNARKs, while simultaneously minimizing assumptions?

1.1 Our contributions

We answer this question positively by constructing accumulation schemes solely in the random-oracle model. Our constructions satisfy a weaker notion of accumulation than that considered in prior work, but we show that this weaker notion still suffices to construct PCD (again, heuristically after instantiating the random oracle). We provide details below.

(1) Bounded-depth accumulation

We introduce a new notion of bounded-depth accumulation schemes that can only provide (knowledge) soundness guarantees for a limited number of consecutive accumulation steps. In contrast, prior accumulation schemes support an unbounded number of accumulation steps.

(2) PCD from bounded-depth accumulation

We show that this weaker notion of accumulation still suffices to construct (a variant of) proof-carrying data. In particular, we show that bounded-depth accumulation suffices to construct PCD for computation graphs of a-priori bounded depth O(1), which is already sufficient for many applications, including the primary application of constructing polynomial-length IVC [10].

We note that, like our construction, all prior PCD constructions only provably support computation graphs of bounded depth [10]. However, unlike these prior works, our construction is vulnerable to attacks when the depth of the computation graph exceeds an a priori fixed constant. See Remarks 2 and 3 for details.

(3) Efficient constructions of bounded-depth accumulation

We construct efficient bounded-depth accumulation schemes from any (non-homomorphic) vector commitment scheme (e.g., random-oracle based Merkle trees) and any linear code.

Compared to the prior state-of-the-art accumulation schemes, our construction has several advantages beyond just avoiding public-key assumptions, such as a truly linear-time accumulation prover (to accumulate proofs for circuits of size n, prior accumulation schemes require performing an n-sized multi-scalar multiplication over elliptic-curve groups; as explained in prior work [37], this operation requires ω(n) group operations, and is hence not truly linear time) and plausible post-quantum security (we only claim plausible post-quantum security, as we prove our construction in the random oracle model instead of the quantum random oracle model [12]; we believe that our results can be extended to the latter model by applying techniques from prior work [25], but leave this to future work). We note that both our construction and prior accumulation schemes rely on the random oracle model; it is an open question to construct an accumulation scheme in the standard model.

(4) Efficiency of and optimizations for instantiated PCD

Instantiating our generic PCD construction with our accumulation scheme leads to PCD schemes with numerous prover efficiency benefits compared to prior work. We also provide several optimizations for this scheme, including support for “batch” accumulation, a new low-overhead compiler from low-depth PCD to IVC, and a new hybrid PCD scheme that combines our low-depth PCD with any SNARK-based PCD scheme to achieve the best of both worlds. We detail the impact of these optimizations in Table 1.

Our scheme also has prover efficiency benefits beyond those captured by Table 1. To elaborate on these, we briefly detail the key factors that contribute to PCD prover cost. In all practical PCD constructions, to produce a proof for the next step of the distributed computation, the PCD prover invokes an underlying cryptographic proof system to prove satisfaction of a circuit C𝖯𝖢𝖣 that contains a circuit representation CΦ of the computation step (or PCD predicate Φ), and also performs other checks required by the PCD construction. Thus, PCD prover efficiency is determined by the prover efficiency of this proof system, and by the cost |C𝖯𝖢𝖣| − |CΦ| of the additional checks. We call the former the proof-system overhead, and the latter the PCD circuit overhead.

PCD constructions based on ROM-based SNARKs have been able to achieve best-in-class proof-system overhead via extensive asymptotic and concrete optimizations (e.g., by achieving truly linear-time provers [15, 16, 17, 37], or relying on small fields [31]), but suffer from high PCD circuit overhead as the SNARK verifier subcircuit must perform numerous expensive random oracle calls.

Accumulation-based PCD constructions, on the other hand, have the smallest PCD circuit overhead (e.g., just ten thousand gates for Nova [41] vs. over one million gates for Fractal [26]). They also try to lower proof-system overhead by avoiding some cryptographic work (e.g., by requiring commitments only to the witness [20]), and via efficient arithmetizations of predicates, e.g., via cheap custom gates. (Custom gates enforce checks beyond simple addition and multiplication; examples include high-degree polynomial gates [35] and lookup gates [15, 36]. See [23, 30, 3] for empirical evidence of the benefits of such gates.) However, the remaining cryptographic work is quite expensive as it requires public-key operations, and hence the proof-system overhead of these constructions is relatively high.

Our construction of bounded-depth accumulation achieves the best of both worlds: it enjoys the low PCD circuit overhead of accumulation-based PCD constructions while taking advantage of the proof-system optimizations enjoyed by both approaches. In fact, the combination opens up new efficiency improvements that are not possible in either of the two paradigms alone. For example, all known ROM-SNARK-based PCD constructions thus far have relied on Reed–Solomon codes, and incur quasilinear prover costs from the quasilinear encoding time of these codes. Our construction, on the other hand, allows the use of any linear code, including linear-time-encodable codes [50, 33, 37], thus reducing prover costs.

Table 1: Comparison of IVC schemes constructed from PCD over a tree of depth d and arity m. All costs omit constant factors. The circuit overhead measures |C𝖯𝖢𝖣| − |CΦ|. The table displays the cost per invocation of Φ. Above, n is the size of the recursive circuit, n′ = O(dmn) is the circuit size of the accumulation decider for the recursive circuit, |C𝖬𝖳| is the circuit size of checking a membership proof in a Merkle tree with n leaves, and T𝖬𝖳 is the time it takes to verify a Merkle tree opening. Note that Fractal supports IVC computations of unbounded length, whereas all but the hybrid construction presented in this paper have a length bound of m^d.
  scheme                        | circuit overhead per step       | IVC verifier
  ------------------------------|---------------------------------|---------------------------
  Fractal [26]                  | λ·log n·|C𝖬𝖳|                   | λ·log n·T𝖬𝖳
  + STIR [2] (concurrent)       | (log n + λ·log log n)·|C𝖬𝖳|     | (log n + λ·log log n)·T𝖬𝖳
  this paper                    | d·λ·|C𝖬𝖳|                       | d·n′
  + batch comm. (Sec 2.5)       | d·λ·m·|C𝖬𝖳|                     | d·m·n′
  + low-overhead IVC (Sec 2.6)  | d·λ·m²·|C𝖬𝖳|                    | d·m·n′
  + hybrid (Sec 2.7)            | (d·λ·m² + λ·log n/m^d)·|C𝖬𝖳|    | λ·log n·T𝖬𝖳

1.2 Related work

PCD from symmetric-key assumptions

As noted in Section 1, the only end-to-end construction of PCD from symmetric-key assumptions is that of Chiesa, Ojha, and Spooner [26]. We provide a quantitative comparison in Table 1, and focus here on a qualitative comparison. Their construction is based on the Fractal SNARK, which they prove secure in the random oracle model. (Like us, their PCD construction must instantiate the random oracle with a concrete hash function, and can hence only achieve heuristic security.)

Boneh, Drake, Fisch, and Gabizon [13] propose an optimization of the foregoing approach that batches the most expensive component, the low-degree test, across multiple proofs. While this concretely reduces prover cost, it does not lead to an asymptotic improvement in the prover overhead.

Like Fractal and similar SNARKs, our construction is able to take advantage of recent advances in the design of efficient code-based Interactive Oracle Proofs (IOPs) [7, 48]. For example, like recent work [51, 46, 31], we can greatly improve efficiency by relying on extension fields of small characteristic. Furthermore, unlike existing works, our fields do not need to have any special algebraic structure (e.g., large multiplicative subgroups).

PCD from public-key assumptions

Except the foregoing, all existing concretely-efficient IVC/PCD constructions [9, 18, 22, 21, 13, 41, 39, 20, 34, 40] rely on public-key assumptions, and in particular rely on the hardness of computing discrete logarithms over elliptic curve groups, which forces the usage of cryptographically large fields. Furthermore, efficient implementations require cycles of elliptic curves, which have proved unwieldy to implement correctly in practice [43]. In comparison, our construction avoids the need for public-key assumptions and this additional algebraic structure, and is able to use non-cryptographic field sizes.

Concurrent work

The concurrent work LatticeFold [11] takes a complementary approach to constructing plausibly post-quantum accumulation-based PCD: it constructs an accumulation/folding scheme from lattice-based assumptions. Unlike our construction, their accumulation scheme directly supports unbounded depth, allowing it to serve as a drop-in replacement in many existing constructions of accumulation-based PCD. However, this comes at a cost: at each accumulation step, the prover must demonstrate that the accumulation result has a small norm, which incurs a non-trivial computational cost. Furthermore, the dependence on lattice-based assumptions means that their scheme still relies on public-key assumptions.

An interesting direction for future work would be to combine the two approaches to reduce the need for small-norm checks. For example, one could construct a more efficient PCD scheme by first designing a bounded-depth accumulation scheme that relies on the bounded homomorphism supported by lattice commitments. One could then use the transformation of Section 2.7 to combine the resulting (bounded-depth) PCD scheme with the (unbounded-depth) LatticeFold PCD scheme to achieve a more efficient construction than either scheme alone.

Another concurrent work, STIR [2], provides a new proximity test that can be used as a drop-in replacement for the proximity test [5] used in Fractal [26]. As noted in Table 1, our schemes still have lower recursion overhead than this optimized baseline. Furthermore, this new STIR + Fractal-based PCD should also be compatible with our hybrid construction from Section 2.7.

 Remark 1 (security of bounded-depth PCD).

All PCD schemes (including ours) only provably support computation graphs of depth O(1). However, while there are no known attacks that break the security of prior schemes when the depth is ω(1), the same is not true for our scheme. As we explain in Section 2.1, our scheme is vulnerable to a relatively straightforward attack that obviates any security guarantees when the depth of the computation graph exceeds an a priori fixed constant. We emphasize that even such bounded-depth PCD is already powerful enough to support many interesting applications, including the primary application of constructing polynomial-length IVC [10]. See Remarks 2 and 3 for a more detailed discussion.

2 Techniques

We begin by reviewing the definition of an accumulation scheme [22, 21]. (We restrict our presentation to split accumulation schemes, as defined in [21].) At a high level, an accumulation scheme is used to perform batch verification of a predicate which, for us, will be a non-interactive argument’s verifier 𝒱. In other words, an accumulation scheme is used to check that 𝒱(x1, π1), …, 𝒱(xn, πn) all accept, more efficiently than the naive approach of individually verifying each instance xi and proof πi.

The workflow of an accumulation scheme is as follows. There are three main algorithms: a prover P, verifier V, and decider D. The prover is initialized with an empty accumulator 𝖺𝖼𝖼0, which is used to accumulate an input (x1, π1) into a new accumulator 𝖺𝖼𝖼1. The prover additionally outputs a proof; we write this as (𝖺𝖼𝖼1, 𝗉𝖿1) ← P(x1, π1, 𝖺𝖼𝖼0). Later, 𝖺𝖼𝖼1 can be used to accumulate a second input, i.e., (𝖺𝖼𝖼2, 𝗉𝖿2) ← P(x2, π2, 𝖺𝖼𝖼1), and so on. The correctness of a sequence of accumulations can then be established by checking that: (a) each accumulation step is valid, i.e., V(xi, πi, 𝖺𝖼𝖼i−1, 𝖺𝖼𝖼i, 𝗉𝖿i) = 1; and (b) the final accumulator is valid, i.e., D(𝖺𝖼𝖼n) = 1.

Notice that the decider only acts on the final accumulator, whereas the verifier acts on each accumulation step. Therefore, the crux of an accumulation scheme is making verification as cheap as possible. Towards this, we require that an accumulator 𝖺𝖼𝖼 can be split into a short instance part 𝖺𝖼𝖼.x and a (possibly) long witness part 𝖺𝖼𝖼.w; we use 𝖺𝖼𝖼 = (𝖺𝖼𝖼.x, 𝖺𝖼𝖼.w) as shorthand. Similarly, we require that an argument proof can be split into instance and witness parts π = (π.x, π.w). The point is that the verifier only looks at the instance parts; we write this as V(xi, πi.x, 𝖺𝖼𝖼i−1.x, 𝖺𝖼𝖼i.x, 𝗉𝖿i).
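To make this workflow concrete, the following minimal sketch (Python; the verifier V, decider D, and the proof/accumulator types are deliberately abstract, and all names are illustrative rather than part of the formal syntax) checks an n-step accumulation chain:

```python
# Sketch: checking an n-step accumulation chain (illustrative only).
# `V` and `D` are the accumulation verifier and decider; `acc0` is the
# initial (empty) accumulator; each element of `steps` is a tuple
# (x_i, pi_i, acc_i, pf_i) output by the accumulation prover at step i.

def check_chain(V, D, acc0, steps):
    prev = acc0
    for (x, pi, acc, pf) in steps:
        # (a) each accumulation step is valid; V only sees the short
        #     instance parts (x, pi.x, prev.x, acc.x).
        if not V(x, pi.x, prev.x, acc.x, pf):
            return False
        prev = acc
    # (b) a single (possibly expensive) check on the final accumulator.
    return D(prev)
```

The efficiency requirement below is exactly what makes this loop worthwhile: the per-step call to V is cheap, while the single call to D may be expensive.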

Definition

An accumulation scheme must satisfy the following properties.

  • Completeness: The honest accumulation (𝖺𝖼𝖼′, 𝗉𝖿) ← P(x, π, 𝖺𝖼𝖼) of any valid input and accumulator should pass both the verifier’s and decider’s checks. That is, if 𝒱(x, π) = 1 and D(𝖺𝖼𝖼) = 1, then V(x, π.x, 𝖺𝖼𝖼.x, 𝖺𝖼𝖼′.x, 𝗉𝖿) = 1 and D(𝖺𝖼𝖼′) = 1.

  • Knowledge soundness: If a new accumulator 𝖺𝖼𝖼′ passes the verifier’s and decider’s checks, then an efficient extractor can find a valid input and old accumulator that explain 𝖺𝖼𝖼′. That is, if D(𝖺𝖼𝖼′) = 1 and V(x, π.x, 𝖺𝖼𝖼.x, 𝖺𝖼𝖼′.x, 𝗉𝖿) = 1, then an efficient extractor can find the witness part of the proof, π.w, and the witness part of the old accumulator, 𝖺𝖼𝖼.w, such that 𝒱(x, π) = 1 and D(𝖺𝖼𝖼) = 1.

  • Efficiency: The cost of running the accumulation verifier n times plus the cost of running the accumulation decider once should be lower than the cost of running the argument verifier n times.

Accumulation schemes can be generalized to handle multiple inputs and accumulators in each step. For example, the prover’s syntax would be P([xi, πi]_{i=1}^{m1}, [𝖺𝖼𝖼i]_{i=1}^{m2}), where m1 and m2 are the arities; see Section 3 for a comprehensive definition.

Prior constructions

All prior accumulation schemes [22, 21, 41, 20, 34, 39, 40] crucially use additively homomorphic vector commitment schemes. Informally, a vector commitment scheme allows one to construct a succinct commitment 𝖼𝗆 to a vector 𝐯 ∈ F^n. The scheme is additively homomorphic if, given 𝖼𝗆1 = 𝖢𝗈𝗆𝗆𝗂𝗍(𝐯1) and 𝖼𝗆2 = 𝖢𝗈𝗆𝗆𝗂𝗍(𝐯2), 𝖼𝗆3 = α·𝖼𝗆1 + β·𝖼𝗆2 is a commitment to α·𝐯1 + β·𝐯2. We remark that all known additively homomorphic vector commitment schemes, e.g., Pedersen commitments [45], rely on public-key assumptions.
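As a toy illustration of such a homomorphism (a Pedersen-style vector commitment in a schoolbook prime-order subgroup; the parameters below are illustrative and far too small to be secure):

```python
# Toy additively homomorphic vector commitment (Pedersen-style; insecure
# illustrative parameters). The group is the order-Q subgroup of squares
# modulo the safe prime P = 2Q + 1.
Q = 1019                          # subgroup order (prime)
P = 2039                          # modulus, P = 2*Q + 1
G = [4, 9, 25, 49]                # "random" generators (squares mod P)

def commit(v):                    # cm = prod_i g_i^{v_i} mod P
    cm = 1
    for g, x in zip(G, v):
        cm = cm * pow(g, x % Q, P) % P
    return cm

v1, v2 = [1, 2, 3, 4], [5, 6, 7, 8]
alpha, beta = 10, 20
lhs = pow(commit(v1), alpha, P) * pow(commit(v2), beta, P) % P
rhs = commit([(alpha * x + beta * y) % Q for x, y in zip(v1, v2)])
assert lhs == rhs   # cm1^alpha * cm2^beta commits to alpha*v1 + beta*v2
```

(In this multiplicative notation, the linear combination α·𝖼𝗆1 + β·𝖼𝗆2 from the text corresponds to 𝖼𝗆1^α · 𝖼𝗆2^β.)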

The general blueprint for an accumulation scheme is as follows. An accumulator witness 𝖺𝖼𝖼.w ∈ F^n is a vector, and the corresponding instance 𝖺𝖼𝖼.x is a commitment to 𝖺𝖼𝖼.w. For simplicity, suppose the prover claims that 𝖺𝖼𝖼1 and 𝖺𝖼𝖼2 accumulate into 𝖺𝖼𝖼3. Roughly speaking, we want to guarantee that the output accumulator is a random linear combination of the input accumulators. The verifier checks this by computing the linear combination of the input commitments 𝖺𝖼𝖼1.x and 𝖺𝖼𝖼2.x, and checking that the result equals the output commitment 𝖺𝖼𝖼3.x [41, 20]. Later, the decider will check that 𝖺𝖼𝖼3.w is a “good” vector, and that 𝖺𝖼𝖼3.x commits to 𝖺𝖼𝖼3.w. Since commitments are binding, 𝖺𝖼𝖼3 must be the correct linear combination of 𝖺𝖼𝖼1 and 𝖺𝖼𝖼2.

We have omitted many details, most notably how to accumulate argument proofs. However, from this description alone we can observe two key properties of the vector commitment scheme. First, it has succinct commitments; this allows the verifier to be efficient. Second, it is additively homomorphic; this allows the verifier to perform meaningful checks. As noted earlier, this combination of properties unfortunately seems to require public-key assumptions.

2.1 Checking linearity

To overcome the foregoing limitation, we make a key observation: to verify that the output accumulator is a linear combination of the input accumulators, it is not necessary to directly compute a linear combination of the input commitments. Instead, it suffices to check that the output accumulator commits to a vector that is a linear combination of the vectors committed by the input accumulators. This idea is a natural one, and has appeared before under the name of “linear combination schemes” [13].

Recall that we want to check that 𝖺𝖼𝖼3.w = α·𝖺𝖼𝖼1.w + β·𝖺𝖼𝖼2.w, where α, β ∈ F are previously chosen scalars. More precisely, we want to check that 𝐯3 = α·𝐯1 + β·𝐯2, where 𝐯1, 𝐯2, and 𝐯3 are the underlying committed vectors of 𝖺𝖼𝖼1.x, 𝖺𝖼𝖼2.x, and 𝖺𝖼𝖼3.x (and since commitments are binding, these vectors correspond with the accumulator witnesses). A linearity check is a protocol between the prover and the verifier that convinces the verifier of this claim. Assuming the verifier is public-coin, this can be made non-interactive using random oracles.

Our goal is to construct a linearity check which does not require homomorphic vector commitments. Instead, we require vector commitments with local openings. Informally, these allow the prover to generate a succinct proof that the underlying committed vector’s i-th element is some claimed value. For example, Merkle tree commitments support local openings with proof size O(λ·log n).

Distance spot checks

The key tool that we will be relying on is a protocol for convincing the verifier that two committed vectors are at most a constant distance apart. Concretely, let 𝖼𝗆1 and 𝖼𝗆2 be commitments to vectors 𝐯1 and 𝐯2 respectively. We say that 𝐯1 and 𝐯2 are δ-far apart if they differ in at least δn locations. To show that 𝐯1 and 𝐯2 are at most δ-far apart, the prover and verifier engage in the following protocol:

  1. The verifier uniformly samples an index i ∈ [n] and sends it to the prover.

  2. The prover responds with the purported i-th elements of 𝐯1 and 𝐯2, along with opening proofs.

  3. The verifier accepts if these opening proofs are valid and the claimed elements are equal.

Clearly, if 𝐯1 and 𝐯2 are more than δ-far apart, then the verifier rejects with probability greater than δ. For any constant δ > 0, this soundness error can be made negligible with Θ(λ) parallel repetitions, as the sketch below illustrates.
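The following self-contained sketch (Python; a bare-bones Merkle tree stands in for any vector commitment with local openings, n is assumed to be a power of two, and the honest prover is played inline; all names are illustrative) implements this spot check:

```python
# Sketch of the distance spot check over Merkle-committed vectors.
import hashlib, secrets

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def commit(v):
    """Merkle-commit to a vector (length assumed to be a power of two)."""
    layer = [H(str(x).encode()) for x in v]
    tree = [layer]
    while len(layer) > 1:
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        tree.append(layer)
    return tree  # root is tree[-1][0]

def open_at(tree, i):
    """Authentication path (local opening) for leaf i."""
    path = []
    for layer in tree[:-1]:
        path.append(layer[i ^ 1])   # sibling at this level
        i //= 2
    return path

def verify_open(root, i, leaf_hash, path):
    h = leaf_hash
    for sib in path:
        h = H(h, sib) if i % 2 == 0 else H(sib, h)
        i //= 2
    return h == root

def spot_check_equal(v1, t1, v2, t2, reps):
    """Spot check that v1 and v2 agree; each repetition catches a
    disagreeing position with probability >= delta if they are delta-far."""
    n = len(v1)
    r1, r2 = t1[-1][0], t2[-1][0]
    for _ in range(reps):
        i = secrets.randbelow(n)    # verifier's random index
        e1, e2 = v1[i], v2[i]       # prover's claimed openings
        ok = (verify_open(r1, i, H(str(e1).encode()), open_at(t1, i))
              and verify_open(r2, i, H(str(e2).encode()), open_at(t2, i)))
        if not ok or e1 != e2:
            return False
    return True
```

With reps = Θ(λ), vectors that are more than δ-far apart are accepted with probability at most (1 − δ)^Θ(λ), which is negligible for any constant δ > 0.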

Linearity spot checks

This protocol easily generalizes to testing any kind of element-wise property, and in particular we can use it to check that 𝐯3 is δ-close to the “virtual vector” α𝐯1+β𝐯2. Unfortunately, we need to ensure that the two vectors are equal at all locations. Suppose a cheating prover commits to a vector that only differs from α𝐯1+β𝐯2 at a single location j. Detecting this would require Θ(λn) repetitions (essentially opening the entire commitment), which violates the accumulation verifier’s efficiency requirement.

2.1.1 Error-resilient linearity checks from codes

It seems that we are at an impasse: our spot check can only guarantee that two vectors are δ-close, but the accumulation scheme requires exact agreement. To overcome this issue, we need to make our accumulation scheme resilient to a constant δ-fraction of corruptions. We do so by relying on linear codes, and in particular those which enjoy good distance properties, such as the Reed–Solomon code [47].

At a high level, we make the following changes to the accumulation scheme blueprint. Let C be a linear code, and let δ be a constant which is smaller than the unique decoding radius of C. The accumulator witness 𝖺𝖼𝖼.w is now a codeword C(𝐰), and the corresponding instance 𝖺𝖼𝖼.x is a commitment to 𝖺𝖼𝖼.w. The accumulation verifier checks that the output accumulator is δ-close to a random linear combination of the input accumulators by running the linearity spot check. Later, the decider will check that 𝖺𝖼𝖼3.w is the encoding of a good vector, and that 𝖺𝖼𝖼3.x commits to 𝖺𝖼𝖼3.w.
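Relative to the distance check sketched above, only the per-position predicate changes; a minimal sketch (the field modulus p and all names are illustrative, and the commitment openings are elided):

```python
# Per-position predicate for the linearity spot check: positions are
# sampled and opened exactly as in the distance spot check; only the
# final equality test changes. alpha, beta are the verifier's challenges.
def linearity_predicate(p, alpha, beta, f1_i, f2_i, f3_i):
    return f3_i % p == (alpha * f1_i + beta * f2_i) % p
```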

Knowledge soundness

We would like our linearity spot check to satisfy the following knowledge soundness property. Suppose a (possibly malicious) prover outputs commitments 𝖼𝗆1, 𝖼𝗆2, and 𝖼𝗆3 which pass the check. Furthermore, suppose that 𝖼𝗆3 commits to a vector 𝐯3. Then an efficient extractor can find vectors 𝐯1 and 𝐯2 such that (a) α𝐯1 + β𝐯2 is δ-close to 𝐯3; and (b) 𝖼𝗆1 and 𝖼𝗆2 commit to 𝐯1 and 𝐯2. Notice that if we can extract vectors that satisfy (b), then our previous analysis of the spot check implies (a). Extraction turns out to be fairly straightforward: if we use Merkle commitments with a random oracle as the hash function, then we can find 𝐯1 and 𝐯2 by observing the prover’s random oracle queries [52]. (To be precise, we must also consider a malicious prover that does not commit to a full vector; see the full version of the paper.)

Returning to accumulation, suppose that a (possibly malicious) prover outputs 𝖺𝖼𝖼1.x, 𝖺𝖼𝖼2.x, and 𝖺𝖼𝖼3 which pass the verifier’s and decider’s checks. Since the verifier runs the spot check, we can extract accumulator witnesses 𝖺𝖼𝖼1.w and 𝖺𝖼𝖼2.w such that α𝖺𝖼𝖼1.w+β𝖺𝖼𝖼2.w is δ-close to 𝖺𝖼𝖼3.w. Since the decider accepts 𝖺𝖼𝖼3.w, we know that 𝖺𝖼𝖼3.w is a codeword C(𝐰3). Similarly, we need 𝖺𝖼𝖼1.w and 𝖺𝖼𝖼2.w to be codewords in order for the decider to accept 𝖺𝖼𝖼1.w and 𝖺𝖼𝖼2.w. Unfortunately, this is simply not the case. For example, a cheating prover can always choose 𝖺𝖼𝖼1.w which agrees with a codeword at all but one location, and this will almost certainly go undetected.

Can we still say something meaningful about the extracted witnesses? We argue that intuitively, since α and β are (possibly correlated) random scalars, with high probability 𝖺𝖼𝖼1.w and 𝖺𝖼𝖼2.w are themselves δ-close to codewords. Moreover, 𝖺𝖼𝖼1.w and 𝖺𝖼𝖼2.w decode to 𝐰1 and 𝐰2 such that α𝐰1+β𝐰2=𝐰3. This intuition can be formally proven using a suitable “proximity gap” result [6], which exists for a variety of parameter regimes.

The upshot is that we extract accumulators 𝖺𝖼𝖼1 and 𝖺𝖼𝖼2 which are only accepted by a relaxed decider. Namely, given an accumulator 𝖺𝖼𝖼, this decider checks that 𝖺𝖼𝖼.x commits to 𝖺𝖼𝖼.w, and moreover that 𝖺𝖼𝖼.w is δ-close to the code.

Recursive extraction

The foregoing analysis suffices for a single step of accumulation. However, in order to construct PCD, we will have to recursively extract from old accumulators. It is straightforward to see that a recursively extracted accumulator is only guaranteed to be 2δ-close to the code, since we are extracting from an accumulator that may already be δ-far from the code. More generally, k steps of recursion will only guarantee accumulators that are kδ-close to the code. We will see that once kδ is larger than the unique decoding radius, extraction is no longer meaningful. In particular, this leads to the following concrete attack: a cheating prover can start with a bad codeword (rejected by the decider) and, over the k accumulation steps, incrementally move it to a good codeword (accepted by the decider). This motivates our notion of “bounded-depth” accumulation, which is not captured by existing definitions [21].

2.2 Defining bounded-depth accumulation

To describe our construction, which only supports accumulation up to a certain (constant) depth, we introduce a new, relaxed knowledge soundness property. We say that an accumulation scheme has bounded-depth knowledge soundness (with maximum depth d) if there exists a family of deciders {Ds}_{s=0}^{d}, where D is equivalent to D0, such that the following holds. If D_{s−1}(𝖺𝖼𝖼′) = 1 and V(x, π.x, 𝖺𝖼𝖼.x, 𝖺𝖼𝖼′.x, 𝗉𝖿) = 1, then an efficient extractor can find π.w and 𝖺𝖼𝖼.w such that 𝒱(x, π) = 1 and Ds(𝖺𝖼𝖼) = 1.

This is a meaningful definition. In addition to generalizing standard knowledge soundness, which can be recovered by setting d = ∞ and using a single decider D, it captures our construction based on error-resilient linearity checks: Ds is the decider that only accepts if the accumulator is at most sδ-far from the code, and in particular D0 only accepts exact codewords. The depth bound d is the maximum number of recursive extractions that we can perform before dδ exceeds the unique decoding radius of the code.
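As an illustrative sanity check on the parameters (the numbers are examples, not normative choices), the maximum depth is simply the number of times the per-step slack δ fits below the unique decoding radius (1 − R)/2 of a rate-R Reed–Solomon code:

```python
# Illustrative parameter arithmetic for bounded-depth knowledge soundness.
from fractions import Fraction

def max_depth(rate, delta):
    """Largest d such that d * delta stays below the unique decoding
    radius (1 - rate) / 2."""
    radius = (1 - rate) / 2
    return int(radius / delta)

# Example: a rate-1/4 code with per-step slack delta = 3/80 supports
# d = 10 recursive extractions before decoding becomes ambiguous.
assert max_depth(Fraction(1, 4), Fraction(3, 80)) == 10
```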

2.3 Bounded-depth PCD from bounded-depth accumulation

Existing theorems that build PCD from accumulation [21, 13, 41] do not immediately translate to the bounded-depth setting. To see why, let us recall a simplified version of the construction from [21]. Suppose we have an accumulation scheme for a non-interactive argument of knowledge (NARK). Informally, a PCD proof for zi, which consists of a NARK proof πi and accumulator 𝖺𝖼𝖼i, certifies that zi = F^i(z0), where z0 is some initial value. We maintain the invariant that if πi and 𝖺𝖼𝖼i are valid, then the computation is correct up to the i-th step.

  • The PCD prover receives a proof (πi,𝖺𝖼𝖼i) for zi, and wants to output a proof for zi+1. First, it accumulates πi and 𝖺𝖼𝖼i into a new accumulator 𝖺𝖼𝖼i+1, generating an accumulation proof 𝗉𝖿i+1. Next, it generates a NARK proof πi+1 for the following claim, expressed as a circuit R (see Figure 1): “zi+1=F(zi), and there exists a NARK proof πi, old accumulator 𝖺𝖼𝖼i, and accumulation proof 𝗉𝖿i+1 which correctly accumulate into 𝖺𝖼𝖼i+1.” The proof for zi+1 is (πi+1,𝖺𝖼𝖼i+1).

  • The PCD verifier checks a proof (πi,𝖺𝖼𝖼i) for zi by running the NARK verifier on πi and the decider on 𝖺𝖼𝖼i.

R(x = (zi+1, 𝖺𝖼𝖼i+1.x), w = (zi, πi.x, 𝖺𝖼𝖼i.x, 𝗉𝖿i+1)):

  1. Check that zi+1 = F(zi).

  2. Set xi = (zi, 𝖺𝖼𝖼i.x).

  3. Check that V(xi, πi.x, 𝖺𝖼𝖼i.x, 𝖺𝖼𝖼i+1.x, 𝗉𝖿i+1) = 1.

Figure 1: Recursion circuit for PCD.

Now suppose we replace the accumulation scheme with one that only has bounded-depth knowledge soundness. The construction remains the same, but we must provide a new soundness analysis.

PCD knowledge soundness

We need to construct an extractor which, given an accepting proof (πT, 𝖺𝖼𝖼T) for zT, extracts a sequence of values z0, …, zT such that zi+1 = F(zi) for all i. [21] gives the following strategy, which interleaves the NARK extractor and accumulation extractor. Suppose we have zi+1, πi+1, and 𝖺𝖼𝖼i+1. First, we invoke the NARK extractor to obtain (zi, πi.x, 𝖺𝖼𝖼i.x, 𝗉𝖿i+1). Second, we invoke the accumulation extractor to obtain (πi.w, 𝖺𝖼𝖼i.w). This gives us πi and 𝖺𝖼𝖼i, and the process continues. We maintain the invariant that in the i-th step, πi and 𝖺𝖼𝖼i are valid.

With bounded-depth accumulation, we need to maintain a slightly weaker invariant: in the i-th step, instead of requiring that 𝖺𝖼𝖼i is accepted by the strict decider D, we only require that it is accepted by the i-th relaxed decider Di. This discussion only provides a high-level overview of the proof strategy, and only describes an IVC construction; we describe the full PCD construction that supports arbitrary (bounded-depth) computation graphs, along with a full soundness analysis, in the full version of the paper.

 Remark 2 (bounded-depth PCD suffices).

As presented, our PCD scheme supports up to d steps of computation, where d is the maximum depth of the accumulation scheme. We call this bounded-depth PCD. Since d will realistically be a small constant, this seems to be of limited use: most computations require more than a constant number of steps! Fortunately, even such a limited PCD scheme can be used to construct IVC for any polynomial-length computation [10]. The idea is for the PCD prover to receive multiple proofs in each step, yielding a computation tree. In particular, if we can accumulate m inputs and m accumulators in a single step, then our PCD scheme can support computation trees of size m^d. Setting m = λ and d = O(1) allows us to support polynomial-size computations, as the arithmetic below illustrates.
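For a sense of scale (illustrative arithmetic only):

```python
# Number of leaf computations supported by a PCD tree of arity m and
# depth d, where the leaves carry the actual computation steps.
m, d = 128, 4        # e.g., m = lambda = 128, constant depth d = 4
print(m ** d)        # 268435456 = 2**28 computation steps
```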

 Remark 3 (bounded-depth vs. constant-depth PCD).

Perhaps surprisingly, even with standard (unbounded) accumulation, [21] is only able to construct constant-depth PCD. (This is slightly different from bounded-depth PCD, where the maximum depth must be fixed in advance.) This is because the size of the PCD extractor grows exponentially in the computation’s depth, regardless of the accumulation scheme’s knowledge soundness property. We remark that this limitation is largely theoretical: there is no known attack which exploits unbounded recursive proof composition. In contrast, the depth bound in bounded-depth PCD is not merely an artifact of the analysis: there exists a concrete attack that can be mounted against our construction when the depth exceeds d. This means that the tree-based strategy described in Remark 2 is necessary for real-world implementations, unlike in prior work.

2.4 Constructing bounded-depth accumulation

Our starting point is the ProtoStar and ProtoGalaxy accumulation schemes [20, 34]. They support a general class of non-interactive arguments in which the proof consists of the NP witness along with a short vector commitment to the witness, and whose verifier performs three steps: (a) computing Fiat–Shamir challenges; (b) checking vector commitment openings; and (c) evaluating a polynomial over the instance, challenges, and openings. The key insight of ProtoStar and ProtoGalaxy is that the accumulation verifier can cheaply perform the first step, while batching the remaining steps and deferring them to the decider.

Let us recall ProtoGalaxy’s strategy [34], focusing on the case where we are trying to accumulate claims about satisfaction of R1CS, a popular NP language in the succinct argument literature. Recall that an R1CS instance consists of constraint matrices A, B, C ∈ F^{N×(ℓ+n)}, and is said to be satisfied by a public input x ∈ F^ℓ and witness w ∈ F^n if A(x‖w) ∘ B(x‖w) − C(x‖w) = 0^N. In ProtoGalaxy, a NARK proof for an R1CS instance with public input x would consist of the proof witness, which is just the R1CS witness w, and the proof instance, which is just a vector commitment to w. The polynomial p evaluated by the verifier is of the form p(x, w) = A(x‖w) ∘ B(x‖w) − C(x‖w). Then, to accumulate R1CS claims, ProtoGalaxy’s accumulator witness 𝖺𝖼𝖼.w is also a vector w ∈ F^n, and its accumulator instance 𝖺𝖼𝖼.x consists of a vector x ∈ F^ℓ, a (homomorphic) vector commitment to w, and an error term e ∈ F. Thus, a NARK proof verification claim can be seen as a special case of the accumulator verification claim where the error term is zero.

To accumulate m such (accumulation or NARK) verification claims for accumulators 𝖺𝖼𝖼1, …, 𝖺𝖼𝖼m, ProtoGalaxy relies on the following univariate polynomial identity:

p(∑_{i=1}^m Li(X)·xi, ∑_{i=1}^m Li(X)·wi) = ∑_{i=1}^m Li(X)·ei mod vH(X),

where Li is the i-th Lagrange polynomial of some m-sized set H ⊆ F, and vH(X) is the vanishing polynomial on H. For some quotient polynomial q, this claim can be rewritten as

p(∑_{i=1}^m Li(X)·xi, ∑_{i=1}^m Li(X)·wi) = ∑_{i=1}^m Li(X)·ei + q(X)·vH(X),

and this formulation allows the verifier to check the identity at a random point α ∈ F if the prover sends q.

This allows us to accumulate the m input claims as follows. The prover constructs a new accumulator 𝖺𝖼𝖼′ whose new instance is x′ := ∑_i Li(α)·xi, new witness vector is w′ := ∑_i Li(α)·wi, and new error is e′ := vH(α)·q(α) + ∑_i Li(α)·ei. The verifier checks that 𝖺𝖼𝖼′ was computed correctly by homomorphically computing the new vector commitment, and directly computing the new instance and new error term. Finally, the accumulation decider completes the checks by asserting that p(x′, w′) = e′. (Technically speaking, this check, as stated, is ill-formed, as the output of p is an N-sized vector, while e′ is a single field element. However, this can be fixed by taking a random linear combination of the output vector; ProtoStar and ProtoGalaxy show how to represent the latter task also as a low-degree polynomial evaluation.)
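The following toy example (Python over the illustrative field F₉₇, with each claim collapsed to a single scalar zi and the toy check polynomial p(z) = z²; a sketch of the identity, not the actual scheme) computes the quotient q, folds two claims at a random α, and confirms the decider’s check on the result:

```python
# Toy check of the folding identity:
#   p(sum_i L_i(X) z_i) = sum_i L_i(X) e_i + q(X) vH(X),  with p(z) = z^2.
P = 97                                 # toy prime field F_97

def pmul(a, b):                        # polynomial product mod P
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def padd(a, b):                        # polynomial sum mod P
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = (out[i] + c) % P
    for i, c in enumerate(b):
        out[i] = (out[i] + c) % P
    return out

def pdivmod(a, b):                     # schoolbook division mod P
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)
    for i in range(len(q) - 1, -1, -1):
        q[i] = a[i + len(b) - 1] * inv % P
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * bj) % P
    return q, a                        # a now holds the remainder

def peval(a, x):                       # Horner evaluation mod P
    y = 0
    for c in reversed(a):
        y = (y * x + c) % P
    return y

L1, L2 = [2, P - 1], [P - 1, 1]        # L1(X) = 2 - X, L2(X) = X - 1 over H = {1, 2}
vH = pmul([P - 1, 1], [P - 2, 1])      # vanishing polynomial (X - 1)(X - 2)

z = [5, 11]                            # the two accumulated "witnesses"
e = [zi * zi % P for zi in z]          # error terms e_i = p(z_i)

w = padd([c * z[0] % P for c in L1], [c * z[1] % P for c in L2])
err = padd([c * e[0] % P for c in L1], [c * e[1] % P for c in L2])
lhs = padd(pmul(w, w), [(-c) % P for c in err])   # p(w(X)) - sum_i L_i(X) e_i
q, rem = pdivmod(lhs, vH)              # ... equals q(X) * vH(X) exactly
assert all(c == 0 for c in rem)

alpha = 7                              # verifier's random challenge
e_new = (peval(vH, alpha) * peval(q, alpha)
         + peval(L1, alpha) * e[0] + peval(L2, alpha) * e[1]) % P
z_new = peval(w, alpha)                # folded witness w(alpha)
assert z_new * z_new % P == e_new      # decider's check: p(z') = e'
```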

Our construction

We follow the same strategy, except we replace homomorphic computations with error-resilient linearity checks. Concretely, we make the following changes. An accumulator witness is a codeword f ∈ C, and an accumulator instance consists of a vector commitment to f and an error term e ∈ F. The decider accepts an accumulator if f is the encoding of a vector w ∈ F^n such that p(x, w) = e. Finally, as discussed in Section 2.1, the verifier uses a linearity check to ensure that the new accumulator 𝖺𝖼𝖼′ is sufficiently consistent with the old accumulators 𝖺𝖼𝖼1, …, 𝖺𝖼𝖼m.

Overall, our construction inherits many desirable properties from ProtoStar [20] and ProtoGalaxy [34], including support for arbitrary arity m=poly(λ), which is crucial for constructing PCD (see Remark 2), and efficient support for custom gates.

Extension to arbitrary linear codes

A key parameter in our construction is the linear code. We use Reed–Solomon codes because they display a proximity gap when the coefficients are the Lagrange evaluations L1(α), …, Lm(α). However, a modified version of our construction extends to arbitrary linear codes. This is advantageous as these codes can have asymptotically faster encoding time compared to Reed–Solomon codes [37]. The construction can work with any proximity gap (e.g., for uniformly random coefficients). The key idea is to commit to two codewords within the accumulator, one for proximity checking and the other for the algebraic check using p(X). See Section 4.3 for details.

Efficiency

The cost of the accumulation verifier is dominated by that of the linearity checker. Recall that to achieve negligible knowledge soundness error at depth d, the latter checks that two codewords are dδ-close via k = O(dλ) spot checks, where δ is such that dδ is less than the unique decoding radius of the code. Overall, when instantiated with the Merkle tree-based vector commitment, the accumulation verifier checks O(dλ) Merkle tree paths. When applying this construction to the PCD scheme in Section 2.3, the latter cost becomes the recursive overhead of the PCD prover. As noted in Table 1, this cost is asymptotically better than the O(λ·log n) cost of prior SNARK-based PCD schemes [26].

A keen reader may notice that a disadvantage of our construction is that recursive overhead scales with the depth of the PCD computation graph. We now present several optimizations that reduce the depth of the PCD tree and significantly improve the efficiency of the resulting PCD scheme in practice.

2.5 Optimization: batch commitments

Recall from Remarks 2 and 3 that for our PCD construction, reducing the depth of the computation graph is essential for achieving provable security guarantees, and the standard depth-reduction technique for the case of IVC [10] works by constructing a PCD tree whose leaves comprise the actual computation being performed. To achieve constant computation depth, Bitansky et al. [10] set the arity m of this tree to be super-constant (i.e., m = λ). The per-node recursive overhead of our PCD construction in this setting is the cost of performing linearity checks on m codewords of size n, which costs O(mλ·log n) hashes when using Merkle trees. We now describe how to reduce this to just O(λ(log n + m)) hashes.

Recall that the linearity checker opens all m codewords at the same locations. This means that for each spot-check, each of the m Merkle trees is opened at the same leaf. We take advantage of this repetitive structure by committing to all m codewords using a single Merkle tree whose i-th leaf is the concatenation of the i-th symbols of the codewords. Each spot-check now requires opening only a single Merkle tree path (and checking the leaf hash), leading to a cost of O(λ(log n + m)) hashes for O(λ) spot checks, as required.
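A minimal sketch of the batched commitment (Python; the toy hashing mirrors the earlier Merkle sketch, n is a power of two, and all names are illustrative):

```python
# Batch-committing m codewords of length n in one Merkle tree: leaf i
# hashes the concatenation of the i-th symbols of all m codewords, so a
# single authentication path opens position i of every codeword at once.
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def batch_commit(codewords):
    n = len(codewords[0])
    leaves = [H(*(str(cw[i]).encode() for cw in codewords))
              for i in range(n)]
    tree = [leaves]
    layer = leaves
    while len(layer) > 1:
        layer = [H(layer[j], layer[j + 1]) for j in range(0, len(layer), 2)]
        tree.append(layer)
    return tree  # root is tree[-1][0]
```

Opening and verification are unchanged from the single-codeword case: one path per queried position now certifies that position in all m codewords simultaneously.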

However, this modification comes at a cost: it requires us to commit to all codewords together at the same time. While this is straightforward at the leaf layer, it gets more complex at higher layers. For example, even committing to a new accumulator now requires waiting for the batch of “sibling” new accumulators, which in turn means that we must wait for m accumulations (each of size m) to complete before we can compute the commitments to the m new accumulators. Overall, across the entire tree, this requires the PCD prover to maintain a state of O(m2) “pending” accumulators. We provide more details in the full version of the paper.

2.6 Optimization: low-overhead IVC from accumulation

We give a generic optimization which improves on the PCD-to-IVC compiler of Bitansky et al. [52, 10]. They construct a (polynomial-length) IVC scheme for a function F by using a (constant-depth) PCD scheme for a related function F′. In particular, F′ computes F (at leaf nodes) and performs consistency checks (at internal nodes). This is wasteful: even though we do not need to prove anything about F at internal nodes, the PCD prover still generates a proof for F′ (which is dominated by F).

We improve this compiler by constructing an IVC scheme with minimal overhead. The core idea is that we first construct a tree of accumulators for F, i.e., a tree whose leaf nodes are proofs for F, and in which each parent accumulates its children. Then, we construct a PCD tree which proves that the accumulation tree was constructed correctly. When the PCD scheme is instantiated with our accumulation-based construction, the PCD circuit now checks two accumulation verifiers: one that checks the correctness of the accumulation tree, and one that helps check the correctness of the PCD tree.

Accumulating separately means that we no longer have to generate NARK proofs for F at internal nodes. Additionally, because we only need to show that internal nodes of the accumulation tree were constructed correctly, our PCD tree has one fewer layer than before. This further reduces cost, in particular for higher arities (as m grows, the leaf layer of a tree contains a higher fraction of nodes). We provide more details in the full version of this paper.

2.7 Optimization: PCD composition

The recursive overhead of our PCD scheme grows linearly with the maximum supported depth. This is in contrast with SNARK-based PCD schemes like Fractal [26], which do not suffer from such an efficiency loss. However, these schemes pay a higher per-step cost anyway, and are thus asymptotically less efficient than our PCD scheme for low recursion depths.

We provide a generic optimization that combines SNARK-based PCD schemes with our PCD scheme to achieve better efficiency than either scheme alone. The core idea is to first use our accumulation-based PCD up to some depth d1, and then prove the PCD verifier for the latter with a SNARK-based PCD scheme on top. When invoked with tree PCD, this means that the SNARK-based PCD scheme is invoked only every m^{d1} steps. By choosing d1 appropriately, the resulting scheme achieves better efficiency than either scheme alone. Furthermore, the resulting scheme supports arbitrary constant depth, as opposed to our accumulation-based scheme, which only supports an a priori fixed depth. We provide more details in the full version of this paper.

3 Bounded-depth accumulation

Let 𝖠𝖱𝖦=(𝒢,𝒫,𝒱) be a non-interactive argument where knowledge soundness (but not necessarily completeness) additionally holds for a “relaxed” verifier 𝒱~. Suppose proofs can be split into two parts, i.e., π=(π.x,π.w). Let 𝗊𝗑 denote (x,π.x), where x is an instance of the relation; call this the verifier input instance. Let 𝗊𝗐 denote π.w; call this the verifier input witness. We write 𝒱(𝗉𝗉,i,𝗊𝗑i,𝗊𝗐i) as shorthand for the verifier running on x and π. We write 𝖺𝖼𝖼 as shorthand for the tuple (𝖺𝖼𝖼.x,𝖺𝖼𝖼.w). An accumulation scheme in the random oracle model for 𝖠𝖱𝖦 is a tuple of algorithms 𝖠𝖲=(G,I,P,V,D) with the following syntax and properties.

  • G(1^λ) → 𝗉𝗉𝖠𝖲: On input a security parameter λ, the generator G samples and outputs accumulation parameters 𝗉𝗉𝖠𝖲.

  • I(𝗉𝗉𝖠𝖲, 𝗉𝗉, i) → (𝖺𝗉𝗄, 𝖺𝗏𝗄, 𝖽𝗄): On input accumulation parameters 𝗉𝗉𝖠𝖲 and argument parameters 𝗉𝗉, the indexer I deterministically computes and outputs a proving key 𝖺𝗉𝗄, verification key 𝖺𝗏𝗄, and decision key 𝖽𝗄.

  • P(𝖺𝗉𝗄, [𝗊𝗑i, 𝗊𝗐i]_{i=1}^{m1}, [𝖺𝖼𝖼i]_{i=1}^{m2}) → (𝖺𝖼𝖼, 𝗉𝖿): On input the proving key 𝖺𝗉𝗄, verifier inputs [𝗊𝗑i, 𝗊𝗐i]_{i=1}^{m1}, and accumulators [𝖺𝖼𝖼i]_{i=1}^{m2}, the accumulation prover P outputs a new accumulator 𝖺𝖼𝖼 and proof 𝗉𝖿. Here, m1 and m2 are fixed arities which may be functions of λ.

  • V(𝖺𝗏𝗄, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}, 𝖺𝖼𝖼.x, 𝗉𝖿) → {0,1}: On input the verifying key 𝖺𝗏𝗄, verifier input instances [𝗊𝗑i]_{i=1}^{m1}, accumulator instances [𝖺𝖼𝖼i.x]_{i=1}^{m2}, new accumulator instance 𝖺𝖼𝖼.x, and proof 𝗉𝖿, the accumulation verifier V outputs a bit indicating whether 𝖺𝖼𝖼.x correctly accumulates the other instances.

  • D(𝖽𝗄, 𝖺𝖼𝖼) → {0,1}: On input the decision key 𝖽𝗄 and accumulator 𝖺𝖼𝖼, the decider outputs a bit indicating whether 𝖺𝖼𝖼 is a valid accumulator.

Completeness

For every (unbounded) adversary 𝒜,

Pr[
    ∀ i ∈ [m1]: 𝒱(𝗉𝗉, i, 𝗊𝗑i, 𝗊𝗐i) = 1
    ∧ ∀ i ∈ [m2]: D(𝖽𝗄, 𝖺𝖼𝖼i) = 1
    ⟹ V(𝖺𝗏𝗄, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}, 𝖺𝖼𝖼.x, 𝗉𝖿) = 1 ∧ D(𝖽𝗄, 𝖺𝖼𝖼) = 1
  |
    ρ ← 𝒰(λ),
    𝗉𝗉𝖠𝖲 ← G(1^λ), 𝗉𝗉 ← 𝒢(1^λ),
    (i, [𝗊𝗑i, 𝗊𝗐i]_{i=1}^{m1}, [𝖺𝖼𝖼i]_{i=1}^{m2}) ← 𝒜(𝗉𝗉𝖠𝖲, 𝗉𝗉),
    (𝖺𝗉𝗄, 𝖺𝗏𝗄, 𝖽𝗄) ← I(𝗉𝗉𝖠𝖲, 𝗉𝗉, i),
    (𝖺𝖼𝖼, 𝗉𝖿) ← P(𝖺𝗉𝗄, [𝗊𝗑i, 𝗊𝗐i]_{i=1}^{m1}, [𝖺𝖼𝖼i]_{i=1}^{m2})
] = 1.

To bootstrap accumulation, we also assume the prover can efficiently construct a dummy accumulator and proof, denoted 𝖺𝖼𝖼 ← P(𝖺𝗉𝗄, ⊥), which the decider accepts.

Knowledge soundness

We say that 𝖠𝖲 has bounded-depth knowledge soundness (with maximum depth ds) if there exists a family of deciders {Ds}_{s=0}^{ds}, where D is equivalent to D0, such that the following holds. There exists an expected polynomial time extractor E such that for every depth parameter s ∈ [ds], expected polynomial time adversary P~, and auxiliary input distribution 𝒟, the following probability is negligibly close to 1:

Pr[
    Vρ(𝖺𝗏𝗄, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}, 𝖺𝖼𝖼.x, 𝗉𝖿) = 1
    ∧ D_{s−1}(𝖽𝗄, 𝖺𝖼𝖼) = 1
    ⟹ ∀ i ∈ [m1]: 𝒱~(𝗉𝗉, i, 𝗊𝗑i, 𝗊𝗐i) = 1 ∧ ∀ i ∈ [m2]: Ds(𝖽𝗄, 𝖺𝖼𝖼i) = 1
  |
    ρ ← 𝒰(λ),
    𝗉𝗉𝖠𝖲 ← G(1^λ), 𝗉𝗉 ← 𝒢(1^λ), 𝖺𝗂 ← 𝒟(1^λ),
    (i, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}, 𝖺𝖼𝖼, 𝗉𝖿; r) ← P~(𝗉𝗉𝖠𝖲, 𝗉𝗉, 𝖺𝗂),
    ([𝗊𝗐i]_{i=1}^{m1}, [𝖺𝖼𝖼i.w]_{i=1}^{m2}) ← E^{P~}(𝗉𝗉𝖠𝖲, 𝗉𝗉, 𝖺𝗂, r),
    (𝖺𝗉𝗄, 𝖺𝗏𝗄, 𝖽𝗄) ← I(𝗉𝗉𝖠𝖲, 𝗉𝗉, i)
]

The extractor implicitly receives s as input.

 Remark 4.

We can also define a bounded-depth version of completeness with a separate family of deciders. In the completeness definition, valid accumulators for the i-th decider accumulate to a valid accumulator for the (i+1)-th decider. This is important in settings where even an honest prover can only perform a bounded number of accumulations.

4 Constructing bounded-depth accumulation

We construct a bounded-depth accumulation scheme, which supports (possibly distinct) arities m1 = poly(λ) and m2 = poly(λ), for a general class of non-interactive arguments. Across all of our constructions, we fix the following global constants: depth bound ds, code rate R ∈ (0, 1), and distance δ ≤ (1 − R)/(2·ds); this guarantees that ds·δ is smaller than the unique decoding radius of a Reed–Solomon code with rate R. Additionally, we use domain separation on the random oracle ρ to model three disjoint oracles: ρH for Merkle trees and hashing, ρ𝖠𝖱𝖦 for the argument verifier’s randomness, and ρ𝖠𝖲 for the accumulation verifier’s randomness. When querying ρ𝖠𝖱𝖦 and ρ𝖠𝖲, we assume the random oracle’s output is used to sample from the verifier’s challenge set. All security proofs are deferred to the full version of the paper.

Notation

If x, y, z are vectors, we will often interpret tuples like (x, (y, z)) as a vector consisting of x, y, z concatenated together. Given a codeword f ∈ C, let f̂ denote the corresponding message which encodes to f. We write 𝐟 as shorthand for the tuple (f^(1), …, f^(μ)) and 𝐂 as shorthand for the Cartesian product C^(1) × ⋯ × C^(μ). Similarly, let 𝐟̂ = (f̂^(1), …, f̂^(μ)) and Δ(𝐟, 𝐠) = max_{j∈[μ]} Δ(f^(j), g^(j)). When querying locations in a codeword, let 𝐐 = (𝒬^(1), …, 𝒬^(μ)) and 𝐟[𝐐] = (f^(1)[𝒬^(1)], …, f^(μ)[𝒬^(μ)]). We use arrow notation as shorthand for tuples of commitment data, e.g., 𝖼𝗆 = (𝖼𝗆^(1), …, 𝖼𝗆^(μ)). Vector commitment functions map over tuples, e.g., (𝖼𝗆, 𝖺𝗎𝗑) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍(𝗏𝗉, 𝐟) should be interpreted as saying “for each j ∈ [μ], let (𝖼𝗆^(j), 𝖺𝗎𝗑^(j)) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍(𝗏𝗉, f^(j)).”

4.1 Non-interactive argument

Our starting point is any special-sound interactive proof with an algebraic verifier. By this, we mean that the verifier’s check can be expressed as a sequence of degree d polynomials (derived from the index) which take the transcript as input. The verifier accepts if all of the polynomials evaluate to zero.

In more detail, we require an interactive proof for some indexed relation over F. Let μ be the number of rounds in the protocol; this may be a function of the index. For simplicity, we assume that the instance x, the prover’s messages m^(1), …, m^(μ), and the verifier’s challenges r^(1), …, r^(μ) are all vectors over F; their lengths may be a function of the index. From the index, the verifier derives degree-d polynomials p1, …, pℓ over F. It accepts if, for all i, pi(x, 𝐫, 𝐦) = 0.

Compressing the verifier

Without loss of generality, we can assume that the algebraic verifier consists of a single polynomial. This is because multiple polynomial checks can be compressed into a single check, e.g., by using the following technique due to [34, 20].

Let Π be an interactive proof where the verifier’s check consists of polynomials p0, …, p_{ℓ−1}, each of degree d. We transform this into an interactive proof 𝖢𝖵[Π] for the same relation, where the verifier’s check is a single polynomial

p(X, Y) = ∑_{i=0}^{ℓ−1} 𝗉𝗈𝗐i(Y)·pi(X).

Here, 𝗉𝗈𝗐i is the unique degree-(log ℓ) polynomial satisfying 𝗉𝗈𝗐i(1, β, β², β⁴, …, β^{ℓ/2}) = β^i for all β ∈ F. It follows that p is of degree d + log ℓ. The interactive protocol is the same as before, except that the verifier additionally samples a challenge y = (1, β, β², β⁴, …, β^{ℓ/2}) where β ← F. The verifier accepts if p(x, y) = 0, where x is the transcript from the original proof.

If Π is (k1, …, kμ)-special-sound, then 𝖢𝖵[Π] is (k1, …, kμ, ℓ)-special-sound. To see why, suppose we have a tree T of accepting transcripts for 𝖢𝖵[Π]. Consider an arbitrary node in the penultimate layer of the tree. Its children correspond with transcripts of the form (x, y1), …, (x, yℓ), each yi distinct. Since the transcripts are accepting, the degree-(ℓ−1) univariate polynomial ∑_{i=0}^{ℓ−1} Z^i·pi(x) is zero at ℓ points. It follows that it is the zero polynomial and, for all i, pi(x) = 0. Let T′ denote T with its bottom layer removed. We have shown that T′ is a tree of accepting transcripts for Π. Hence, we construct an extractor for 𝖢𝖵[Π] by running the extractor for Π on T′.
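To see the mechanics of 𝗉𝗈𝗐i concretely (illustrative Python over a toy field): given the challenge vector y = (1, β, β², β⁴, …), 𝗉𝗈𝗐i(y) simply multiplies the entries selected by the binary expansion of i, recovering β^i with a monomial of degree log ℓ:

```python
# Illustrative mechanics of the pow_i compression (toy field F_97).
P = 97

def pow_i(i, y):
    out, bit = 1, 0
    while i:
        if i & 1:
            out = out * y[bit + 1] % P   # y[0] = 1 is the constant entry
        i >>= 1
        bit += 1
    return out

beta, ell = 5, 8
y = [1] + [pow(beta, 2 ** j, P) for j in range(ell.bit_length() - 1)]
for i in range(ell):
    assert pow_i(i, y) == pow(beta, i, P)
```

Hence ∑_i 𝗉𝗈𝗐i(y)·pi(x) = ∑_i β^i·pi(x) is a random evaluation of the degree-(ℓ−1) polynomial whose coefficients are the pi(x), which vanishes at a random β with probability at most (ℓ−1)/|F| unless all pi(x) are zero.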

𝒢(1λ):

  1. Choose a suitable field F with log|F| = Ω(λ).

  2. Let 𝗏𝗉 ← 𝖵𝖢.𝖲𝖾𝗍𝗎𝗉(1λ, F).

  3. Output 𝗉𝗉 = (F, 𝗏𝗉).

ℐρ(𝗉𝗉 = (F, 𝗏𝗉), i):

  1. Query τ ← ρH(i).

  2. From F and i, derive the following parameters, collected into 𝗉:

     • the number of rounds, denoted μ;

     • the length of the instance;

     • for each j ∈ [μ], the length, denoted ℓ^(j), of the prover’s j-th message;

     • for each j ∈ [μ], the format of the verifier’s j-th challenge.

  3. From F and i, derive the verifier’s check p.

  4. For each j ∈ [μ], let C^(j) be a Reed–Solomon code over F with dimension ℓ^(j), rate R, and evaluation domain L^(j).

  5. Output ipk = (F, 𝗏𝗉, i, τ, 𝗉, 𝐂) and ivk = (F, 𝗏𝗉, τ, 𝗉, p, 𝐂).

𝒫ρ(𝗉𝗉, i, x, w):

  1. Compute the proving key ipk = (F, 𝗏𝗉, i, τ, 𝗉, 𝐂) according to ℐρ(𝗉𝗉, i).

  2. For j ∈ [μ]:

     • Compute the prover’s j-th message m^(j) ← P(F, i, x, w, [m^(i), r^(i)]_{i=1}^{j−1}).

     • Encode m^(j) to f^(j) ∈ C^(j).

     • Let (𝖼𝗆^(j), 𝖺𝗎𝗑^(j)) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍ρH(𝗏𝗉, f^(j)).

     • Query r^(j) ← ρ𝖠𝖱𝖦(τ, x, 𝖼𝗆^(1), …, 𝖼𝗆^(j)).

  3. Output π = (𝖼𝗆, 𝖺𝗎𝗑).

𝒱ρ(𝗉𝗉, i, x, π = (𝖼𝗆, 𝖺𝗎𝗑)):

  1. Compute the verification key ivk = (F, 𝗏𝗉, τ, 𝗉, p, 𝐂) according to ℐρ(𝗉𝗉, i).

  2. For each j ∈ [μ], query r^(j) ← ρ𝖠𝖱𝖦(τ, x, 𝖼𝗆^(1), …, 𝖼𝗆^(j)).

  3. Let 𝐟 ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝖺𝗎𝗑).

  4. Verify that 𝐟 ∈ 𝐂.

  5. Accept if p(x, 𝐫, 𝐟̂) = 0.

𝒱~ρ(𝗉𝗉, i, x, π = (𝖼𝗆, 𝖺𝗎𝗑)):

  1. Compute the verification key ivk = (F, 𝗏𝗉, τ, 𝗉, p, 𝐂) according to ℐρ(𝗉𝗉, i).

  2. For each j ∈ [μ], query r^(j) ← ρ𝖠𝖱𝖦(τ, x, 𝖼𝗆^(1), …, 𝖼𝗆^(j)).

  3. Let 𝐟 ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝖺𝗎𝗑).

  4. Find 𝐠 ∈ 𝐂 by uniquely decoding 𝐟. If no such codeword exists, reject.

  5. Accept if p(x, 𝐫, 𝐠̂) = 0.

Figure 2: Non-interactive argument algorithms.
Committing to messages

In order to achieve efficient accumulation, we will instead have the prover send commitments to messages. Only in the final move of the protocol does the prover send openings to all of the commitments. Strictly speaking, this is only special-sound for an “augmented relation”; namely, there exists an extractor which, given a tree of accepting transcripts, outputs either a witness or a break of the commitment scheme. Nonetheless, assuming the scheme is computationally binding, applying the Fiat–Shamir transformation yields a non-interactive argument of knowledge for the original relation. We refer to [20] for a more detailed analysis.

Removing interaction

Given a special-sound interactive proof (P, V), we apply the Fiat–Shamir transformation (with commitments) to get a non-interactive argument of knowledge [4]. In order to achieve efficient accumulation, we use a standard variant of the transformation where the index is first hashed to a succinct value τ. The Fiat–Shamir transform outputs a non-interactive argument 𝖠𝖱𝖦 = (𝒢, 𝒫, 𝒱), shown in Figure 2, for the indexed relation family {ℛ𝗉𝗉}𝗉𝗉=(F,𝗏𝗉). We also define an indexer ℐ as a helper algorithm.

Attema et al. [4] prove that multi-round public-coin protocols can be compiled into non-interactive arguments. However, their definition of knowledge-soundness is slightly different from ours: they require that the extractor succeeds with non-negligible probability if the adversary succeeds with non-negligible probability. Our definition, on the other hand, requires that the extractor succeeds with all but negligible probability whenever the adversary succeeds. However, these definitions are equivalent as one can boost the extractor’s success probability by running it until it succeeds. This works as the extractor is only required to run in expected polynomial time.

Relaxed verifier

We use a specific commitment scheme to support accumulation: the prover sends a vector commitment to a codeword of the message, along with the full auxiliary data. We relax the verifier in two different ways to get 𝒱~ (also shown in Figure 2). First, we allow it to decode noisy codewords. Second, we allow it to accept partial openings to the vector commitment where the missing positions are interpreted as erasure errors. Because the verifier only accepts when the partial opening is decodable, the prover is still bound to a unique (decoded) vector; hence, knowledge soundness is not affected.

4.2 Accumulation scheme

Fix a subset H ⊆ F of size m = m1 + m2; this may be a function of λ. We construct an accumulation scheme 𝖠𝖲 = (G, I, P, V, D) for 𝖠𝖱𝖦. The argument verifier inputs are split into 𝗊𝗑 = (x, 𝖼𝗆) and 𝗊𝗐 = 𝖺𝗎𝗑. Accumulators have a nearly identical structure; in fact, any verifier input (𝗊𝗑, 𝗊𝗐) can be converted into an accumulator 𝖺𝖼𝖼 by setting 𝖺𝖼𝖼.w = 𝗊𝗐 and 𝖺𝖼𝖼.x = CastInputρ(τ, 𝗊𝗑), which is defined below.

CastInputρ(τ, 𝗊𝗑 = (x, 𝖼𝗆)):

  1. For each j ∈ [μ], query r^(j) ← ρ𝖠𝖱𝖦(τ, x, 𝖼𝗆^(1), …, 𝖼𝗆^(j)).

  2. Collect x and 𝐫 into a vector x.

  3. Output 𝖺𝖼𝖼.x = (0, x, 𝖼𝗆).

We also define a helper function which casts [𝗊𝗑i]_{i=1}^{m1} and [𝖺𝖼𝖼i.x]_{i=1}^{m2} into a single list of m = m1 + m2 accumulator instances.

Castρ(τ, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}):

  1. Output [CastInput(τ, 𝗊𝗑1), …, CastInput(τ, 𝗊𝗑m1), 𝖺𝖼𝖼1.x, …, 𝖺𝖼𝖼m2.x].

Finally, we define helper functions which perform the bulk of proving and verification.

Proveρ(𝖺𝗉𝗄, [ei, xi, 𝖼𝗆i, 𝖺𝗎𝗑i]_{i=1}^m):

  1. For each i ∈ [m], let 𝐟i ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆i, 𝖺𝗎𝗑i).

  2. Compute the univariate polynomial q ∈ F[X] of degree at most d(m−1) − m such that

     p(∑_{i=1}^m Li,H(X)·(xi, 𝐟̂i)) = vH(X)·q(X) + ∑_{i=1}^m Li,H(X)·ei.

  3. Query α ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q), where α ∈ F is a uniformly sampled field element.

  4. Compute e = vH(α)·q(α) + ∑_i Li,H(α)·ei.

  5. Compute (x, 𝐟̂) = ∑_i Li,H(α)·(xi, 𝐟̂i).

  6. Let (𝖼𝗆, 𝖺𝗎𝗑) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍ρH(𝗏𝗉, 𝐟).

  7. Set 𝖺𝖼𝖼.x = (e, x, 𝖼𝗆) and 𝖺𝖼𝖼.w = 𝖺𝗎𝗑.

  8. Query 𝐐 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q, 𝖺𝖼𝖼.x), where each 𝒬^(j) is a uniformly sampled t-sized subset of L^(j).

  9. For each i ∈ [m], let 𝗈𝗉i ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑i, 𝐐).

  10. Let 𝗈𝗉 ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑, 𝐐).

  11. Output 𝖺𝖼𝖼 and 𝗉𝖿 = (q, [𝗈𝗉i]_{i=1}^m, 𝗈𝗉).

Verifyρ(𝖺𝗏𝗄, [ei, xi, 𝖼𝗆i]_{i=1}^m, (e, x, 𝖼𝗆), (q, [𝗈𝗉i]_{i=1}^m, 𝗈𝗉)):

  1. Query α ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q).

  2. Query 𝐐 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q, 𝖺𝖼𝖼.x).

  3. For each i ∈ [m], let 𝐯i = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆i, 𝗈𝗉i). If 𝐯i[𝐐] contains ⊥, reject.

  4. Let 𝐯 = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝗈𝗉, 𝐐). If 𝐯[𝐐] contains ⊥, reject.

  5. Verify that e = vH(α)·q(α) + ∑_{i=1}^m Li,H(α)·ei.

  6. Verify that (x, 𝐯) = ∑_{i=1}^m Li,H(α)·(xi, 𝐯i).

See Figure 3 for a full description of 𝖠𝖲, along with the family of deciders.

G(1λ):

  1. Choose a suitable spot check parameter t = Ω(λ).

  2. Output 𝗉𝗉𝖠𝖲 = t.

Iρ(𝗉𝗉𝖠𝖲 = t, 𝗉𝗉 = (F, 𝗏𝗉), i):

  1. Obtain τ, 𝗉, p, 𝐂 from ℐρ(𝗉𝗉, i).

  2. Output 𝖺𝗉𝗄 = (F, 𝗏𝗉, t, τ, 𝗉, p, 𝐂), 𝖺𝗏𝗄 = (F, 𝗏𝗉, t, τ, 𝗉, 𝐂), and 𝖽𝗄 = (F, 𝗏𝗉, 𝗉, p, 𝐂).

Pρ(𝖺𝗉𝗄, [𝗊𝗑i, 𝗊𝗐i]_{i=1}^{m1}, [𝖺𝖼𝖼i]_{i=1}^{m2}):

  1. Let [ei, xi, 𝖼𝗆i]_{i=1}^m ← Cast(τ, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}).

  2. Parse [𝖺𝗎𝗑i]_{i=1}^m = [𝗊𝗐1, …, 𝗊𝗐m1, 𝖺𝖼𝖼1.w, …, 𝖺𝖼𝖼m2.w].

  3. Output Proveρ(𝖺𝗉𝗄, [ei, xi, 𝖼𝗆i, 𝖺𝗎𝗑i]_{i=1}^m).

Vρ(𝖺𝗏𝗄, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}, 𝖺𝖼𝖼.x, 𝗉𝖿):

  1. Let [ei, xi, 𝖼𝗆i]_{i=1}^m ← Cast(τ, [𝗊𝗑i]_{i=1}^{m1}, [𝖺𝖼𝖼i.x]_{i=1}^{m2}).

  2. Accept if Verify(𝖺𝗏𝗄, [ei, xi, 𝖼𝗆i]_{i=1}^m, 𝖺𝖼𝖼.x, 𝗉𝖿) accepts.

Dsρ(𝖽𝗄, 𝖺𝖼𝖼.x = (e, x, 𝖼𝗆), 𝖺𝖼𝖼.w = 𝖺𝗎𝗑):

  1. Let 𝐟 ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝖺𝗎𝗑).

  2. Find 𝐠 ∈ 𝐂 with Δ(𝐟, 𝐠) ≤ sδ by decoding 𝐟. If no such codeword exists, reject.

  3. Accept if p(x, 𝐠̂) = e.

Figure 3: Accumulation scheme algorithms.

4.3 Using arbitrary linear codes

At first glance, our accumulation scheme does not use any special properties of the Reed–Solomon code. Because other codes might have desirable properties, e.g., linear-time encoding, this motivates the following question: can we instantiate it with an arbitrary linear code, so long as it has good distance? Recall that the accumulator’s vectors are a random linear combination of the input vectors. In particular, the coefficients are Lagrange evaluations of a random field element; this is necessary for compressing the polynomial evaluation claims. Unfortunately, existing proximity gaps for arbitrary linear codes [49, 1, 32] do not support this specific distribution of coefficients.

Separating the proximity claim

In some sense, our construction accumulates two distinct claims for the same vector: a polynomial evaluation claim, and a proximity claim. To overcome the foregoing issue, we modify our accumulation scheme to separate these claims. Specifically, each accumulator now holds two codewords: the first is a linear combination using Lagrange coefficients, and the second is a linear combination using proximity gap coefficients. Accordingly, the accumulation verifier uses the first codeword to compress evaluation claims, and the second codeword to maintain proximity. This will be roughly twice as expensive as before.

Definition 5 (Proximity gaps for linear codes).

Let C be a linear code with relative distance d and blocklength n. We will treat elements of C as functions v : [n] → F. Let 𝖦𝖾𝗇 be a function that takes in randomness 𝗋𝗇𝖽 and outputs coefficients r1, …, rm ∈ F. We say that C has a proximity gap with respect to distribution 𝖦𝖾𝗇, error 𝖾𝗋𝗋, and proximity bound 𝖡 if the following holds.

For any δ ≤ 𝖡 and arbitrary functions u1, …, um : [n] → F, if

Pr_{𝗋𝗇𝖽}[ Δ(∑_{i=1}^m ri·ui, C) ≤ δ : (r1, …, rm) ← 𝖦𝖾𝗇(𝗋𝗇𝖽) ] > 𝖾𝗋𝗋(m),

then for all (r1, …, rm) in the support of the distribution 𝖦𝖾𝗇(𝗋𝗇𝖽), we have

Δ(∑_{i=1}^m ri·ui, C) ≤ δ.

Furthermore, there exist v1, …, vm ∈ C such that for all (r1, …, rm) in the support,

Δ(∑_{i=1}^m ri·ui, ∑_{i=1}^m ri·vi) ≤ δ,

and in fact |{x ∈ [n] : (u1(x), …, um(x)) ≠ (v1(x), …, vm(x))}| ≤ δn.

Lemma 6.

Let C be a linear code with relative distance d, blocklength n, and a proximity gap with respect to distribution 𝖦𝖾𝗇, error 𝖾𝗋𝗋, and proximity bound 𝖡. For δ ≤ 𝖡 and vectors f1, …, fm ∈ F^n, suppose the following holds:

Pr_{𝗋𝗇𝖽}[ Δ(∑_{i=1}^m ri·fi, C) ≤ δ : (r1, …, rm) ← 𝖦𝖾𝗇(𝗋𝗇𝖽) ] > 𝖾𝗋𝗋(m).

Then there exist a subdomain L ⊆ [n] and codewords g1, …, gm ∈ C such that the following holds. First, |L|/n ≥ 1 − δ. Second, for all i ∈ [m], fi and gi agree on L.

Construction

We use linear codes with efficient unique decoding up to 𝖡 = O(1), the proximity bound of the proximity gap. Fix δ ≤ 𝖡/ds; this guarantees that ds·δ ≤ 𝖡, which in turn is smaller than the unique decoding radius. The modified construction is shown below; we omit the description of algorithms that are unchanged from Figure 3.

CastInputρ(τ, 𝗊𝗑 = (x, 𝖼𝗆)):

  1. For each j ∈ [μ], query r^(j) ← ρ𝖠𝖱𝖦(τ, x, 𝖼𝗆^(1), …, 𝖼𝗆^(j)).

  2. Collect x and 𝐫 into a vector x.

  3. Output 𝖺𝖼𝖼.x = (0, x, 𝖼𝗆, 𝖼𝗆′), where 𝖼𝗆′ are commitments to zero vectors.

Proveρ(𝖺𝗉𝗄, [ei, xi, 𝖼𝗆i, 𝖺𝗎𝗑i, 𝖼𝗆′i, 𝖺𝗎𝗑′i]_{i=1}^m):

  1. For each i ∈ [m], let 𝐟i ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆i, 𝖺𝗎𝗑i) and 𝐟′i ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆′i, 𝖺𝗎𝗑′i).

  2. Compute the univariate polynomial q ∈ F[X] of degree at most d(m−1) − m such that

     p(∑_{i=1}^m Li,H(X)·(xi, 𝐟̂i)) = vH(X)·q(X) + ∑_{i=1}^m Li,H(X)·ei.

  3. Query α, 𝗋𝗇𝖽 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q), where α ∈ F and 𝗋𝗇𝖽 are sampled uniformly.

  4. Compute e = vH(α)·q(α) + ∑_i Li,H(α)·ei.

  5. Compute (x, 𝐟̂) = ∑_i Li,H(α)·(xi, 𝐟̂i).

  6. Let (r1, …, rm, r′1, …, r′m) ← 𝖦𝖾𝗇(𝗋𝗇𝖽). Compute 𝐟′ = ∑_{i=1}^m ri·𝐟i + r′i·𝐟′i.

  7. Let (𝖼𝗆, 𝖺𝗎𝗑) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍ρH(𝗏𝗉, 𝐟) and (𝖼𝗆′, 𝖺𝗎𝗑′) ← 𝖵𝖢.𝖢𝗈𝗆𝗆𝗂𝗍ρH(𝗏𝗉, 𝐟′).

  8. Set 𝖺𝖼𝖼.x = (e, x, 𝖼𝗆, 𝖼𝗆′) and 𝖺𝖼𝖼.w = (𝖺𝗎𝗑, 𝖺𝗎𝗑′).

  9. Query 𝐐 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q, 𝖺𝖼𝖼.x), where each 𝒬^(j) is a uniformly sampled t-sized subset of [n].

  10. For each i ∈ [m], let 𝗈𝗉i ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑i, 𝐐) and 𝗈𝗉′i ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑′i, 𝐐).

  11. Let 𝗈𝗉 ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑, 𝐐) and 𝗈𝗉′ ← 𝖵𝖢.𝖮𝗉𝖾𝗇ρH(𝗏𝗉, 𝖺𝗎𝗑′, 𝐐).

  12. Output 𝖺𝖼𝖼 and 𝗉𝖿 = (q, [𝗈𝗉i, 𝗈𝗉′i]_{i=1}^m, 𝗈𝗉, 𝗈𝗉′).

Verifyρ(𝖺𝗏𝗄, [ei, xi, 𝖼𝗆i, 𝖼𝗆′i]_{i=1}^m, (e, x, 𝖼𝗆, 𝖼𝗆′), (q, [𝗈𝗉i, 𝗈𝗉′i]_{i=1}^m, 𝗈𝗉, 𝗈𝗉′)):

  1. Query α, 𝗋𝗇𝖽 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q).

  2. Query 𝐐 ← ρ𝖠𝖲(τ, [𝖺𝖼𝖼i.x]_{i=1}^m, q, 𝖺𝖼𝖼.x).

  3. For each i ∈ [m], let 𝐯i = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆i, 𝗈𝗉i). If 𝐯i[𝐐] contains ⊥, reject.

  4. Let 𝐯 = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝗈𝗉, 𝐐). If 𝐯[𝐐] contains ⊥, reject.

  5. For each i ∈ [m], let 𝐯′i = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆′i, 𝗈𝗉′i). If 𝐯′i[𝐐] contains ⊥, reject.

  6. Let 𝐯′ = 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆′, 𝗈𝗉′, 𝐐). If 𝐯′[𝐐] contains ⊥, reject.

  7. Verify that e = vH(α)·q(α) + ∑_{i=1}^m Li,H(α)·ei.

  8. Verify that (x, 𝐯) = ∑_{i=1}^m Li,H(α)·(xi, 𝐯i).

  9. Let (r1, …, rm, r′1, …, r′m) ← 𝖦𝖾𝗇(𝗋𝗇𝖽). Verify that 𝐯′ = ∑_{i=1}^m ri·𝐯i + r′i·𝐯′i.

Dsρ(𝖽𝗄, 𝖺𝖼𝖼.x = (e, x, 𝖼𝗆, 𝖼𝗆′), 𝖺𝖼𝖼.w = (𝖺𝗎𝗑, 𝖺𝗎𝗑′)):

  1. Let 𝐟 ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆, 𝖺𝗎𝗑).

  2. Let 𝐟′ ← 𝖵𝖢.𝖠𝗇𝗌𝗐𝖾𝗋ρH(𝗏𝗉, 𝖼𝗆′, 𝖺𝗎𝗑′).

  3. Find 𝐠, 𝐠′ ∈ 𝐂 such that Δ(𝐟, 𝐠), Δ(𝐟′, 𝐠′) ≤ sδ by decoding 𝐟 and 𝐟′. If no such codewords exist, reject.

  4. Accept if p(x, 𝐠̂) = e.

References