Strong Locally Testable Codes with Relaxed Local Decoders

Locally testable codes (LTCs) are error-correcting codes that admit very efficient codeword tests. An LTC is said to be strong if it has a proximity-oblivious tester, that is, a tester that makes only a constant number of queries and rejects non-codewords with a probability that depends solely on their distance from the code. Locally decodable codes (LDCs) are complementary to LTCs. While the latter allow for highly efficient rejection of strings that are far from being codewords, LDCs allow for highly efficient recovery of individual bits of the information that is encoded in strings that are close to being codewords. Constructions of strong-LTCs with nearly-linear length are known, but the existence of a constant-query LDC with polynomial length is a major open problem. In an attempt to bypass this barrier, Ben-Sasson et al. (SICOMP 2006) introduced a natural relaxation of local decodability, called relaxed-LDCs. This notion requires local recovery of nearly all individual information-bits, yet allows for recovery-failure (but not error) on the rest. Ben-Sasson et al. constructed a constant-query relaxed-LDC with nearly-linear length (i.e., length k^{1+α} for an arbitrarily small constant α > 0, where k is the dimension of the code). This work focuses on obtaining strong testability and relaxed decodability simultaneously. We construct a family of binary linear codes of nearly-linear length that are both strong-LTCs (with one-sided error) and constant-query relaxed-LDCs. This improves upon the previously known constructions, which either obtain only weak LTCs or require polynomial length. Our construction heavily relies on tensor codes and PCPs. In particular, we provide strong canonical PCPs of proximity for membership in any linear code with constant rate and relative distance. Loosely speaking, these are PCPs of proximity wherein the verifier is proximity oblivious (similarly to strong-LTCs) and every valid statement has a unique canonical proof.
Furthermore, the verifier is required to reject non-canonical proofs (even for valid statements). As an application, we improve the best known separation result between the complexity of decision and verification in the setting of property testing.


Introduction
Locally testable codes (LTCs) are error-correcting codes that can be tested very efficiently. Specifically, a code is said to be an LTC if there exists a probabilistic algorithm, called a tester, that is given a proximity parameter ε > 0 and oracle access to an input string (an alleged codeword), makes a small number (e.g., poly(1/ε)) of queries to the input, and is required to accept valid codewords and reject with high probability input strings that are ε-far from being a codeword (i.e., reject strings that disagree with every codeword on at least an ε fraction of the bits). The systematic study of LTCs was initiated by Goldreich and Sudan [GS06], though the notion was mentioned, in passing, a few years earlier by Friedl and Sudan [FS95] and Rubinfeld and Sudan [RS96].
A natural strengthening of the notion of locally testable codes (LTCs) is known as strong-LTCs. While LTCs (also referred to as weak-LTCs) allow for a different behavior of the tester for different values of the proximity parameter, strong-LTCs are required to satisfy a strong uniformity condition over all values of the proximity parameter. In more detail, the tester of a strong-LTC does not get a proximity parameter as input, and is instead required to make only a constant number of queries and reject non-codewords with probability that is related to their distance from the code. See [GS06, Gol10] for a discussion of both types of local testability. We note that from a property testing point of view, strong-LTCs can be thought of as codes that can be tested by a proximity-oblivious tester (see [GR09]).
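As a toy illustration (ours, not the paper's), consider the binary repetition code {0^n, 1^n}. It has a natural proximity-oblivious tester: sample two positions and accept iff they agree. The sketch below computes the exact rejection probability of one round of this test, which is at least the relative distance of the input from the code, exactly the behavior required of a strong-LTC tester.

```python
import random

def repetition_tester(w):
    """One round of a proximity-oblivious tester for the repetition code
    {0^n, 1^n}: sample two uniform positions and accept iff they agree."""
    n = len(w)
    i, j = random.randrange(n), random.randrange(n)
    return w[i] == w[j]

def rejection_probability(w):
    # Exact rejection probability of the round above, i.e., Pr[w_i != w_j]
    # over independent uniform i, j. For p the fraction of ones this equals
    # 2p(1-p), which is at least min(p, 1-p) = the relative distance of w
    # from the code -- so rejection depends only on the distance.
    n = len(w)
    p = sum(w) / n
    return 2 * p * (1 - p)
```

For example, a string with a 0.2 fraction of flipped bits is rejected with probability 0.32 ≥ 0.2 in a single constant-query round, with no proximity parameter given to the tester.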
The two most fundamental parameters of error-correcting codes (and strong-LTCs in particular) are the distance and the codeword length. Throughout this work we only consider codes with constant relative distance, and so our main parameter of interest is the length, which measures the amount of redundancy of information in each codeword. By this criterion, constructing a strong-LTC with linear length (and constant relative distance) is the holy grail of designing efficient locally testable codes. Although some progress was recently made towards showing the impossibility of such linear-length LTCs [DK11, BV12], there are known constructions of strong-LTCs with relatively good parameters: Goldreich and Sudan [GS06] constructed a strong-LTC with constant relative distance and nearly-linear length, where throughout this paper a code of dimension k is said to have nearly-linear length if its codewords are of length k^{1+α} for an arbitrarily small constant α > 0. Furthermore, Viderman [Vid13] recently constructed a strong-LTC with constant relative distance and quasilinear length (i.e., length k · polylog(k)).
Another natural local property of codes is local decodability. A code is said to be a locally decodable code (LDC) if it allows for highly efficient recovery of any individual bit of the message encoded in a somewhat corrupted codeword. That is, there exists a probabilistic algorithm, called a decoder, that is given a location i and oracle access to an input string w that is promised to be sufficiently close to a codeword. The decoder is allowed to make a small (usually constant) number of queries to the input w and is required to decode the i-th bit of the information that corresponds to the codeword that w is closest to. Following the work of Katz and Trevisan [KT00], which formally defined the notion of LDCs, these codes received much attention and found numerous applications (see, e.g., [Tre04, Yek12] and the references therein). They are also related to private information retrieval protocols [CGKS98] (see [Gas04] for a survey).
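The classical example of such a decoder (a standard textbook illustration, not taken from this paper) is the 2-query local decoder for the Hadamard code, whose codeword lists the inner product ⟨x, r⟩ mod 2 for every r ∈ {0,1}^k. Since ⟨x, r⟩ ⊕ ⟨x, r ⊕ e_i⟩ = x_i, two random queries recover bit i, correctly with probability at least 1 − 2δ when the input is δ-close to a codeword.

```python
import random

def bin_inner(x_bits, r):
    # Inner product <x, r> mod 2, with r given as an integer bitmask.
    s = 0
    for idx, b in enumerate(x_bits):
        s ^= b & ((r >> idx) & 1)
    return s

def hadamard_encode(x_bits):
    # The Hadamard codeword has one bit per r in {0,1}^k: the value <x, r>.
    k = len(x_bits)
    return [bin_inner(x_bits, r) for r in range(2 ** k)]

def local_decode(w, i, k):
    # Two-query local decoder: x_i = <x, r> xor <x, r xor e_i> for any r.
    # Picking r at random makes each query uniform, so if w is delta-close
    # to a codeword, both queries avoid corruptions w.p. >= 1 - 2*delta.
    r = random.randrange(2 ** k)
    return w[r] ^ w[r ^ (1 << i)]
```

The catch, as the text notes, is the length: the Hadamard code maps k bits to 2^k bits, and no known constant-query LDC achieves polynomial length.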
Despite the much attention that LDCs have received in recent years, the best known LDCs are of super-polynomial length (cf. [Efr12], building on [Yek08]). While the best known lower bound (cf. [KT00]) only shows that any q-query LDC must be of length Ω(k^{1+1/(q-1)}) (where k is the dimension of the code), the existence of a constant-query LDC with polynomial length remains a major open problem.
In an attempt to bypass this barrier, Ben-Sasson et al. [BGH+06] introduced a natural relaxation of the notion of local decodability, known as relaxed-LDCs. This relaxation requires local recovery of most (or nearly all) individual information-bits, yet allows for recovery-failure (but not error) on the rest. Specifically, a code is said to be a relaxed-LDC if there exists an algorithm, called a (relaxed) decoder, that has oracle access to an input string that is promised to be sufficiently close to a codeword. Similarly to LDCs, the decoder is allowed to make few queries to the input in an attempt to decode a given location in the message. However, unlike LDCs, the relaxed decoder is allowed to output an abort symbol on a small fraction of the locations, indicating that the decoder detected a corruption in the codeword and is unable to decode this specific information-bit. Note that the decoder must still avoid errors (with high probability).
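The decode-or-abort behavior can be illustrated with a deliberately naive toy (ours, and much weaker than the actual relaxed-LDCs discussed below): store the message twice, inside a "codeword part" and as an explicit copy, and have the decoder cross-check the two occurrences of the requested bit, aborting on any disagreement rather than risking a wrong answer.

```python
ABORT = "?"  # stand-in for the abort symbol, usually written as a bottom symbol

def encode(x):
    # Toy code: x followed by a running-parity tail (standing in for a good
    # systematic code), followed by an explicit copy of x.
    parity_tail = [sum(x[: j + 1]) % 2 for j in range(len(x))]
    return x + parity_tail + x  # "codeword part" || "message part"

def relaxed_decode(w, i, k):
    # Cross-check the two occurrences of bit i. On disagreement the decoder
    # cannot tell which copy is corrupted, so it aborts instead of erring.
    a, b = w[i], w[2 * k + i]
    return a if a == b else ABORT
```

This toy only detects disagreements between two copies; the real constructions replace the naive cross-check with PCPP-based consistency proofs so that a constant number of queries suffices even against adversarial corruption patterns.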
Throughout this work, unless explicitly stated otherwise, when we say that a code is a relaxed-LDC, we actually mean that it is a relaxed-LDC with constant query complexity.
Ben-Sasson et al. [BGH+06] constructed a relaxed-LDC with nearly-linear length. More generally, they showed that for every constant α > 0 there exists a relaxed-LDC (with constant relative distance) that maps k-bit messages to k^{1+α}-bit codewords and has query complexity O(1/α²). While these relaxed-LDCs are dramatically shorter than any known LDC, they do not break the currently known lower bound on LDCs (cf. [KT00]), and hence it is still an open question whether relaxed-LDCs are a strict relaxation of LDCs.

Obtaining Local Testability and Decodability Simultaneously
In this work, we are interested in short codes that are both (strongly) locally testable and (relaxed) locally decodable. The motivation behind such codes is very natural, as the notion of local decodability is complementary to the notion of local testability: the success of the decoding procedure of a locally decodable code is contingent on the promise that the input is sufficiently close to a valid codeword. If the locally decodable code is also locally testable, then this promise can be verified by the testing procedure. However, recall that there are no known constant-query LDCs with even polynomial length, let alone ones that are also locally testable. Hence, we focus on relaxed-LDCs. There are a couple of known constructions of codes that are both locally testable and relaxed decodable (with constant query complexity). Ben-Sasson et al. [BGH+06] observed that their relaxed-LDC can be modified to also be a weak-LTC (i.e., an LTC that is not strong) while keeping its length nearly-linear. However, the local testability of their code is inherently weak (see Section 1.3 for details). In a recent development, Gur and Rothblum [GR13] constructed a relaxed-LDC that is also a strong-LTC, albeit with polynomial length.
In this paper, we improve upon the aforementioned results of [BGH+06] and [GR13], achieving the best of both worlds. That is, we construct a code that is both a strong-LTC and a relaxed-LDC with nearly-linear length.
Theorem 1.1 (informal). There exists a binary linear code that is a relaxed-LDC and a (one-sided error) strong-LTC with constant relative distance and nearly-linear length.
A formal statement of Theorem 1.1 is given in Section 3. We remark that we actually prove a slightly stronger claim; namely, that any good linear code can be augmented (by appending additional bits to each codeword) into a code that is both a relaxed-LDC and a strong-LTC, at the cost of increasing the codeword length from linear to nearly-linear.

Strong Canonical PCPs of Proximity
The notion of PCPs of proximity plays a major role in many constructions of LTCs and relaxed-LDCs, as well as in our own. Loosely speaking, PCPs of proximity (PCPPs) are a variant of PCP proof systems that can be thought of as the PCP analogue of property testing. Recall that a standard PCP is given explicit access to a statement (i.e., an input that is supposedly in some NP language) and oracle access to a proof (i.e., a "probabilistically checkable" NP witness). The PCP verifier is required to probabilistically verify whether the (explicitly given) statement is correct by making few queries to the alleged proof. In contrast, a PCPP is given oracle access to a statement and to a proof, and is only allowed to make a small number of queries to both the statement and the proof. Since a PCPP verifier sees only a small part of the statement (typically, only a constant number of bits), it cannot be expected to verify the statement precisely. Instead, it is required only to accept correct statements and reject statements that are far from being correct (i.e., far in Hamming distance from any valid statement).
PCPs of proximity were first studied by Ben-Sasson et al. [BGH+06] and by Dinur and Reingold [DR06] (wherein they are called assignment testers). The main parameters of interest in a PCPP system for a language L are its query complexity (i.e., the total number of queries to the input and to the proof that the PCPP verifier makes in order to determine membership in L) and its proof length, which can be thought of as measuring the amount of redundancy of information in the proof. Ben-Sasson et al. [BGH+06] showed a PCPP for any language in NP with constant query complexity and nearly-linear length (in fact, the length is n^{1+o(1)}, where n is the length of the corresponding NP-witness).
As we have already noted, PCPPs have a central theoretical significance as the property testing analogue of PCP proof systems. Moreover, PCPPs were shown to be useful in various applications, e.g., for PCP composition and alphabet reduction [BGH+06, DR06], and for locally testable and locally decodable codes [BGH+06, GS06, GR13]. Further information regarding the latter application follows.
The notions of locally testable codes and PCPs of proximity are tightly connected. Not only can PCPPs (and PCPs in general) be thought of as the computational analogue of the (combinatorial) notion of LTCs, but in addition any code can be made locally testable by using an adequate PCPP. Specifically, Ben-Sasson et al. [BGH+06] showed that any linear code can be transformed into a (weak) LTC by appending to each codeword a PCPP proof that ascertains that the codeword is indeed properly encoded. However, since there is no guarantee that every two different proofs for the same statement are far apart (in Hamming distance), two additional steps are taken in order to prevent a deterioration of the distance of the code: firstly, the appended PCPP proof should be uniquely determined per codeword (i.e., each codeword has a canonical proof), and secondly, each codeword is repeated many times so that the PCPP part constitutes only a small fraction of the total length.
The drawback of the foregoing approach is that it results in locally testable codes that are inherently weak (i.e., codes that do not allow for proximity-oblivious testing). To see this, note that PCPPs only guarantee that false assertions are rejected (with high probability), while true assertions can be accepted even if the proof is incorrect. Hence, corruptions in the PCPP part are not necessarily detectable and the canonicity of the PCPP proofs may not be verifiable, ruling out the possibility of a (strong) tester that is uniform over all possible values of the proximity parameter. Moreover, when trying to build strong-LTCs, an additional problem that arises is that, by definition, PCPPs do not necessarily provide strong soundness, i.e., reject false proofs with probability that depends only on their distance from a correct proof.
Motivated by the construction of strong locally testable codes, Goldreich and Sudan [GS06, Section 5.3] considered a natural strengthening of the notion of PCPPs, known as strong canonical PCPs of proximity (hereafter scPCPPs), which addresses the aforementioned issues. Loosely speaking, scPCPPs are PCPPs with strong soundness that are required to reject "wrong" proofs, even for correct statements. Moreover, they require that each correct statement have only one acceptable proof. In more detail, scPCPPs are PCPPs with two additional requirements: (1) canonicity: for every true statement there exists a unique proof (called the canonical proof) that the verifier is required to accept, and any other proof (even for a correct statement) must be rejected; and (2) strong soundness: the scPCPP verifier is required to be proximity oblivious and to reject any statement-proof pair with probability that is related to its distance from a true statement and its corresponding canonical proof. A formal definition of scPCPPs can be found in Section 2.4.
Given a construction of adequate scPCPPs, the aforementioned strategy of appending to each codeword an efficient scPCPP (which ascertains membership in a code) would allow transforming any code into a strong-LTC. Unfortunately, unlike standard PCPPs, for which there are efficient constructions for any language in NP, there are no known constructions of general-purpose scPCPPs. Yet, Goldreich and Sudan constructed a mechanism, called linear inner proof systems (LIPS), which is closely related to some special-purpose scPCPPs. Loosely speaking, the LIPS mechanism allows one to transform linear strong locally testable codes over a large alphabet into strong locally testable codes over a smaller alphabet (see [GS06, Section 5.2] for further details). By a highly non-trivial construction and usage of the LIPS mechanism, Goldreich and Sudan showed efficient constructions of strong-LTCs. Unfortunately, their constructions do not meet our needs. Nevertheless, building upon their techniques, we show strong canonical PCPs of proximity with polynomial length for any good linear code.
Theorem 1.2 (scPCPP for good codes, informal). Let C be a linear code with constant relative distance and linear length. Then, there exists a scPCPP with polynomial proof length for membership in the set of all codewords of C.
In fact, we prove a slightly stronger statement. Specifically, our scPCPPs satisfy two additional properties that will be useful for our main construction: the scPCPP proofs are linear (over GF(2)), and the queries that the verifier makes are roughly uniform. We remark that the scPCPPs in Theorem 1.2 are not only crucial to our construction (see Section 1.4 for details), but are also interesting in their own right. A formal statement of Theorem 1.2 and its proof are presented in Section 6.

Previous Works and Techniques
In this subsection, we survey the previous works and techniques regarding relaxed-LDCs upon which we build. We start by recalling the construction of the (nearly) quadratic-length relaxed-LDC of Ben-Sasson et al. [BGH+06, Section 4.2]. The core idea underlying their construction is to partition each codeword into three parts: the first providing the distance property, the second allowing for "local decodability", and the third ascertaining the consistency of the first two parts. The natural decoder for such a code verifies the consistency of the first two parts via the third part, and decodes according to the second part in case it detects no consistency error. Details follow.
Let C be any good linear code (i.e., a code with constant relative distance and linear length). Ben-Sasson et al. construct a new code C' whose codewords consist of three parts of equal length: (1) repetitions of a good codeword C(x) that encodes the message x; (2) repetitions of the explicitly given message x; and (3) PCPPs that ascertain the consistency of each individual bit in the message x (which is explicitly given in the second part) with the codeword C(x) (which is explicitly given in the first part). We remark that since the total length of the PCPPs is significantly longer than the statements they ascertain, the desired length proportions are obtained by the repetitions in the first two parts. Observe that the first part grants the new code C' good distance (although it may not be locally decodable), the second part allows for highly efficient decoding of the message (at the cost of reducing the distance), and the third part is needed in order to guarantee that the first two parts refer to the same message. The (relaxed) decoder for C' uses the PCPPs in the third part in order to verify that the first part (the codeword C(x)) is consistent with the bit we wish to decode in the second part (the message x). If the PCPP verifier detects no error, the decoder returns the relevant bit in the second part; otherwise, it returns an abort symbol.
In order to implement the aforementioned relaxed-LDC, an adequate PCPP is needed; that is, an efficient PCPP for verifying the consistency of each individual bit in a message x with the codeword C(x). We note that such statements are in P. Recall that Ben-Sasson et al. [BGH+06, Section 3] showed PCPPs with nearly-linear length for any language in NP. Hence, the consistency of each message bit with a codeword of C can be guaranteed by a PCPP of length that is nearly-linear in the length of C. Since C' is obtained by augmenting a good linear code C with a single PCPP proof per message bit (claiming consistency between that bit and the codeword of C), the length of C' is (nearly) quadratic (i.e., length k^{2+α} for an arbitrarily small constant α > 0, where k is the dimension of the code). We note that Ben-Sasson et al. showed that the length of C' can be improved to nearly-linear by, roughly speaking, partitioning the message into blocks of various lengths and decoding based on a chain of consistent blocks. Recall that any code can be transformed into a weak locally testable code by appending adequate PCPPs to it (see [BGH+06, Section 4.1]). Applying this transformation to the relaxed-LDC does not hamper the relaxed decodability of the code, and increases its length only by a moderate amount (since the PCPPs are of nearly-linear length); hence this transformation yields a (constant-query) relaxed-LDC with nearly-linear length that is also a (weak) LTC. We stress that the aforementioned transformation yields local testability that is inherently weak, due to the fact that it uses standard PCPPs. However, if the PCPPs in use were actually scPCPPs (of nearly-linear length), then the foregoing code would have been strongly testable.
In a recent work, Gur and Rothblum [GR13] constructed scPCPPs with polynomial length for the particular family of linear-length statements that are needed for the [BGH+06] relaxed-LDC. By using these scPCPPs in the construction of [BGH+06], they obtained a relaxed-LDC that is also a strong-LTC, albeit with polynomial length (due to the length of their scPCPPs). While we conjecture that it is feasible to construct nearly-linear length scPCPPs for P (which contains the family of statements we wish to have scPCPPs for), and even for unique-NP (also known as the class US), we do not obtain such scPCPPs here. Instead, we take an alternative approach, which circumvents this challenge, as described in the next subsection.

Our Techniques
In this subsection, we present our main techniques and ideas for constructing a relaxed-LDC with nearly-linear length that is also a strong-LTC. Our starting point is the (weakly testable) relaxed-LDC construction of Ben-Sasson et al. [BGH+06]. However, we wish to replace the PCPPs that they use with scPCPPs, in order to achieve strong local testability.
Since we do not have general-purpose scPCPPs (let alone ones of nearly-linear length), we construct special-purpose scPCPPs that allow us to ascertain the particular statements we are interested in (see Theorem 1.2). It is crucial to note that the scPCPPs we are able to construct have polynomial proof length (and not nearly-linear length, as we would have hoped). Recall that the statements needed for the construction of Ben-Sasson et al. (i.e., ascertaining the consistency of each bit of the message with the entire codeword for decodability, and ascertaining the validity of the codeword for testability) are linear in the length of the message. Therefore, applying our scPCPPs in a naive way (i.e., replacing the PCPPs in the construction of Ben-Sasson et al. with our scPCPPs) would yield codes with polynomial length, whereas we are aiming for nearly-linear length. Instead, we use an alternative approach.
The key idea is to provide scPCPPs that only refer to sufficiently short statements, such that even with the polynomial blow-up of the scPCPP, the length of each proof is still sub-linear. Specifically, instead of providing proofs for the validity of the entire codeword and the consistency of each message bit with the entire codeword (as in [BGH+06]), we provide proofs for the consistency of each message bit with "small" parts of the code, and for the validity of these small parts. If each part is sufficiently small (i.e., of length k^α for an arbitrarily small constant α > 0, where k is the length of the message), then we can still obtain a code with nearly-linear length, even when providing polynomial-length proofs for all of the small parts.
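The length accounting behind this idea can be sketched as exponent arithmetic. The constants below are ours, chosen purely for illustration: assume roughly k small statements, each of length k^α, and a scPCPP blow-up of (k^α)^c for some constant c.

```python
def total_length_exponent(c, alpha):
    # Roughly k small statements, each of length k**alpha, each carrying a
    # scPCPP of length poly(statement) = (k**alpha)**c. The total scPCPP
    # length is then about k * k**(c * alpha), i.e., exponent 1 + c*alpha.
    return 1 + c * alpha
```

So for any desired nearly-linear exponent 1 + β, choosing α = β/c keeps the total length at k^{1+β}, no matter how large the (constant) polynomial blow-up c is.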
The caveat, however, is that proving that each message bit is consistent with a small part (or local view) of a codeword does not necessarily imply that the message bit is consistent with the entire codeword. Similarly, partitioning a codeword into small parts and proving the validity of each part does not imply the validity of the entire codeword. Therefore, we need the base code (to which we append scPCPPs) to be highly structured, so that, loosely speaking, the local consistency and validity we are able to ascertain can be used to enforce global consistency and validity. Concretely, the strategy we employ is to use tensor codes and to prove that this family of codes has features that allow us to overcome the aforementioned caveat. Details follow.
Given a linear code C : {0,1}^k → {0,1}^n, the tensor code C ⊗ C : {0,1}^{k²} → {0,1}^{n²} consists of all n × n matrices whose rows and columns are codewords of C. Similarly, the d-dimensional tensor code C^{⊗d} : {0,1}^{k^d} → {0,1}^{n^d} consists of all d-dimensional tensors such that each (axis-parallel) line in the tensor is a codeword of C. (See Section 2.3 for the exact definitions.) Towards obtaining relaxed local decodability, we show that tensor codes satisfy a feature, which we call local propagation, that allows us to verify global consistency statements (such as the ones used in the [BGH+06] relaxed-LDC) by verifying local consistency statements, which we can afford to prove with polynomial-length scPCPPs; the local propagation feature of tensor codes is discussed in Section 4.1. Hence, we can ascertain that the value at each point in the tensor is consistent with the entire codeword by verifying the consistency of a constant number of randomly selected statements regarding small parts of the tensor (specifically, statements of consistency between the value at a point in the tensor and a line that passes through it). We remark that Theorem 1.2 can be used to derive polynomial-length scPCPPs for such statements (see Section 6). Therefore, we can replace the nearly-linear length PCPPs used in [BGH+06] with our polynomial-length scPCPPs, while preserving the functionality of relaxed local decoding and keeping the total length of the construction nearly-linear. (See Sections 4.1 and 4.2 for a more detailed high-level description of our approach, followed by a full proof in Section 4.3.)
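The local statements in question are tiny: a point together with one axis-parallel line through it. The sketch below (our illustration, with the even-parity code standing in for a good base code C) checks one such line statement directly; in the actual construction these short statements are certified by polynomial-length scPCPPs rather than read in full.

```python
def line_through(T, i, j, axis):
    # The axis-parallel line through point (i, j): row i or column j.
    return list(T[i]) if axis == 0 else [row[j] for row in T]

def point_line_check(T, i, j, axis):
    # A local consistency statement: the line through (i, j) along `axis`
    # must be a codeword of the base code C. Here C is the even-parity
    # code, a stand-in for a good linear code, so the check is a parity.
    return sum(line_through(T, i, j, axis)) % 2 == 0
```

A single corrupted entry violates the line statements of every line through it, which is the starting point for propagating local checks into a global consistency guarantee.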
Recapping, so far our construction is as follows. Let C be a good linear code and let d ∈ N be a sufficiently large constant. Each codeword of our code consists of the following equal-length parts: (1) repetitions of the tensor codeword C^{⊗d}(x) that encodes the message x; (2) repetitions of the explicitly given message x; and (3) scPCPPs for small statements (specifically, regarding the consistency of each point in the tensor C^{⊗d}(x) with each line that passes through it), which are used to ascertain the consistency of each individual bit in the message x with the codeword C^{⊗d}(x). Finally, we augment the aforementioned construction with a fourth and last part that allows us to obtain strong local testability. The naive approach is to append a scPCPP that ascertains the validity of all three parts of our code. However, since the length of our scPCPPs is polynomial in the length of the statement, this approach would yield codes with long (polynomial) length. Instead, recall that we can (strongly) test the consistency of the first two parts via the third part (which is also strongly testable, since it is a scPCPP). Thus, in order to obtain strong local testability, it suffices to ascertain that the first part is a valid codeword of C^{⊗d} using scPCPPs. Luckily, tensor codes also satisfy the robustness feature, which allows us to ascertain the validity of an entire codeword of C^{⊗d} by ascertaining the validity of small parts of the codeword. Details follow.
Loosely speaking, a code is said to be robust if the corruption in a random "local view" of a codeword is proportional to the corruption in the entire codeword. In more detail, we use a recent result of Viderman [Vid12] (building on [BS06]), which states that the corruption in a random 2-dimensional (axis-parallel) plane of a corrupted codeword of a binary tensor code C^{⊗d} (where d ≥ 3) is proportional to the corruption in the entire codeword. This feature allows us to ascertain the validity of the first part (i.e., the tensor codeword C^{⊗d}(x)) by only providing scPCPPs for short statements that refer to 2-dimensional planes in C^{⊗d}(x). (See Section 5 for a more detailed high-level description, followed by a full proof.)

Applications to Property Testing
As an application of our main result (Theorem 1.1) we improve on the best known separation result (due to [GR13]) between the complexity of decision and verification in the setting of property testing.
The study of property testing, initiated by Rubinfeld and Sudan [RS96] and Goldreich, Goldwasser and Ron [GGR98], considers highly efficient randomized algorithms that solve approximate decision problems while only inspecting a small fraction of the input. Recently, Gur and Rothblum [GR13] initiated the study of MA proofs of proximity (hereafter MAPs), which can be viewed as the NP analogue of property testing. They reduced the task of separating the power of property testers and MAPs to the design of very local codes, both in terms of testability and decodability. Furthermore, they noticed that for such a separation, relaxed decodability suffices.
Gur and Rothblum used several weaker codes to obtain weaker separation results than the one we obtain here. Specifically, they either show a smaller gap between the query complexity of testers and MAPs, or show a separation only for a limited range of the proximity parameter. In contrast, by plugging in the code of Theorem 1.1, we obtain the best known (exponential) separation between the power of MAPs and property testers.

Theorem 1.3 (Informal).
There exists a property that requires n^{0.999} queries for every property tester, but admits an MAP that uses a proof of logarithmic length and makes poly(1/ε) queries.
For more information regarding this application, we refer the reader to Section 7.

Organization.
In Section 2 we provide the preliminaries. In Section 3 we describe the construction of the codes that establish Theorem 1.1. In Sections 4 and 5 we establish the relaxed local decodability and strong local testability (respectively) of the codes. In Section 6 we construct the scPCPPs needed for our construction, and finally, in Section 7 we present an application of our codes to property testing.

Preliminaries
We start with some general notation. We denote by [n] the set {1, 2, ..., n}. For i ∈ [n] and x ∈ {0,1}^n, we denote by x_i the i-th bit of x. For x, y ∈ {0,1}^n, we denote by ∆(x, y) the Hamming distance between x and y, and by δ(x, y) the relative (Hamming) distance between x and y, i.e., δ(x, y) = ∆(x, y)/n. We say that x is δ-close to (respectively, δ-far from) y if the relative distance between x and y is at most δ (respectively, at least δ).
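These distance notions translate directly into code (a straightforward sketch of the definitions above):

```python
def hamming_distance(x, y):
    # Delta(x, y): the number of positions on which x and y differ.
    return sum(a != b for a, b in zip(x, y))

def relative_distance(x, y):
    # delta(x, y) = Delta(x, y) / n.
    return hamming_distance(x, y) / len(x)

def is_close(x, y, delta):
    # x is delta-close to y if their relative distance is at most delta.
    return relative_distance(x, y) <= delta
```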
Given a set S, we denote by s ∈_R S the distribution obtained by selecting s ∈ S uniformly at random. For a randomized algorithm A, we write Pr_A[·] (or E_A[·]) to state that the probability (or expectation) is over the internal randomness of the algorithm A.
(Non-)Uniformity. Throughout this paper, to simplify the presentation, we formally treat algorithms (testers, decoders, and verifiers) as (non-uniform) polynomial-size circuits. We note, however, that all of our algorithms can be made uniform via straightforward modifications. Furthermore, it will be convenient for us to view the length n ∈ N of objects as fixed. We note that although we fix n, it should be viewed as a generic parameter, and so we allow ourselves to write asymptotic expressions such as poly(n), O(n), etc. In contrast, when we say that something is a constant, we mean that it is independent of the length parameter n.

Error Correcting Codes
Let k, n ∈ N. A binary linear code C : {0,1}^k → {0,1}^n of distance d is a linear mapping over GF(2), which maps messages to codewords, such that the Hamming distance between any two codewords is at least d = d(n). The relative distance of C, denoted δ(C), is given by d/n. The length of the code is n = n(k). Slightly abusing notation, we say that we can construct a code C with nearly-linear length if for any constant α > 0 we can construct a code C : {0,1}^k → {0,1}^n, where n = k^{1+α}. For any x ∈ {0,1}^n, we denote the relative distance of x to the code C by δ_C(x) = min_{y ∈ C} δ(x, y).
We say that C is systematic if the first k bits of every codeword of C contain the message; that is, if for every x ∈ {0,1}^k and every i ∈ [k] it holds that C(x)_i = x_i. Since C is a linear code, we may assume without loss of generality that it is systematic.
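A toy illustration of a systematic linear code: with a generator matrix of the form [I | A], the first k bits of every codeword are the message itself, and encoding is linear over GF(2). The matrix A below is an arbitrary illustrative choice, not taken from the paper.

```python
import itertools

K = 3
A = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]  # parity part; codeword = (x, x*A mod 2), so length n = 6

def encode(x):
    """Systematic encoding: message bits followed by GF(2) parity bits."""
    parity = [sum(x[i] * A[i][j] for i in range(K)) % 2 for j in range(K)]
    return list(x) + parity

# Systematic: C(x)_i = x_i for every message x and every i in [k].
for x in itertools.product([0, 1], repeat=K):
    assert encode(x)[:K] == list(x)

# Linear over GF(2): C(x) + C(y) = C(x + y) coordinate-wise mod 2.
x, y = (1, 0, 1), (0, 1, 1)
s = tuple((a + b) % 2 for a, b in zip(x, y))
assert [(a + b) % 2 for a, b in zip(encode(x), encode(y))] == encode(s)
```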

Local Testability and Decodability
Following the discussion in the introduction, strong locally testable codes are defined as follows.

Definition 2.1 (strong-LTC). A code C : {0,1}^k → {0,1}^n is a strong-LTC if there exists a probabilistic algorithm (tester) T that, given oracle access to w ∈ {0,1}^n, makes O(1) queries to w and satisfies:

• Completeness: For every codeword w of C, it holds that Pr_T[T^w = 1] = 1.

• Strong Soundness: For every w ∈ {0,1}^n that is not a codeword of C, it holds that Pr_T[T^w = 0] ≥ poly(δ_C(w)).

We say that a tester makes nearly-uniform queries if it queries each bit of the (alleged) codeword w ∈ {0,1}^n with probability Θ(1/n).
Following the discussion in the introduction, relaxed locally decodable codes are defined as follows.
Definition 2.2 (relaxed-LDC). A code C : {0,1}^k → {0,1}^n is a relaxed-LDC if there exist a constant δ_radius ∈ (0, δ(C)/2), a constant ρ > 0, and a probabilistic algorithm (decoder) D that, given oracle access to w ∈ {0,1}^n and explicit input i ∈ [k], makes O(1) queries to w and satisfies:

1. Completeness: For every i ∈ [k] and x ∈ {0,1}^k it holds that D^{C(x)}(i) = x_i.

2. Relaxed Soundness: For every i ∈ [k] and every w ∈ {0,1}^n that is δ_radius-close to a codeword C(x) (see Footnote 9), it holds that Pr[D^w(i) ∈ {x_i, ⊥}] ≥ 2/3, where ⊥ is a special "don't know" symbol.

3. Success Rate: For every w ∈ {0,1}^n that is δ_radius-close to a codeword C(x), and for at least a ρ fraction of the indices i ∈ [k], with probability at least 2/3 the decoder D outputs the i-th bit of x. That is, there exists a set I_w ⊆ [k] of size at least ρk such that for every i ∈ I_w it holds that Pr[D^w(i) = x_i] ≥ 2/3.

We remark that our definition is slightly stronger than the one given in [BGH+06], as we require perfect completeness (i.e., the decoder always outputs the correct value given oracle access to a valid codeword of the code C).

Tensor Codes
Tensor codes are defined as follows.
Definition 2.3 (Tensor Codes). Let C : {0,1}^k → {0,1}^n be a linear code. The tensor code C^{⊗2} : {0,1}^{k²} → {0,1}^{n²} is the code whose codewords consist of all n × n matrices such that each axis-parallel line (i.e., each row and each column) of the matrix is a codeword of C. Similarly, given d ∈ N, the tensor code C^{⊗d} : {0,1}^{k^d} → {0,1}^{n^d} is the code whose codewords consist of all d-dimensional tensors in which each axis-parallel line is a codeword of C. It is well known that for every d ∈ N the tensor code C^{⊗d} is a linear code with relative distance δ(C)^d (see, e.g., [BS06]). Given a message x ∈ {0,1}^{k^d} and a coordinate ī = (i_1, ..., i_d) ∈ [n]^d, we denote the value of C^{⊗d}(x) at coordinate ī by C^{⊗d}(x)_ī.
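The two-dimensional case can be made concrete with a toy base code. Taking C_0 to be the even-parity code of length 3 (a [3,2] linear code, chosen here purely for illustration), a 3×3 matrix is a codeword of C_0^{⊗2} iff every row and every column has even parity; encoding rows first and then columns (which, for linear codes, agrees with the reverse order) produces such a matrix.

```python
def enc0(msg):
    """Systematic encoding for the toy base code C0: append a parity bit."""
    return list(msg) + [sum(msg) % 2]

def in_C0(w):
    return sum(w) % 2 == 0

def tensor_encode(msg):
    """Encode a 2x2 message matrix into a 3x3 codeword of C0^{tensor 2}."""
    rows = [enc0(r) for r in msg]                 # encode each row with C0 ...
    cols = [enc0(col) for col in zip(*rows)]      # ... then each column
    return [list(row) for row in zip(*cols)]      # transpose back

def in_tensor(w):
    """Membership in C0^{tensor 2}: all rows and all columns in C0."""
    return all(in_C0(r) for r in w) and all(in_C0(c) for c in zip(*w))

cw = tensor_encode([[1, 0], [1, 1]])
assert in_tensor(cw)
cw[0][0] ^= 1            # a single corruption violates a row and a column
assert not in_tensor(cw)
```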
Remark 2.4. By the definition of tensor codes, if a linear code C is systematic, then the tensor code C^{⊗d} is also a systematic code;¹⁰ that is, for every x ∈ {0,1}^{k^d} and every ī ∈ [k]^d it holds that C^{⊗d}(x)_ī = x_ī.

Next, we provide notation for the restriction of tensors to lines and planes. We start by defining axis-parallel lines.
Definition 2.5 (Axis-Parallel Lines). For j ∈ [d] and ī = (i_1, ..., i_d) ∈ [n]^d, we denote by ℓ_{j,ī} the j-th axis-parallel line passing through ī. That is, ℓ_{j,ī} = {(i_1, ..., i_{j−1}, x, i_{j+1}, ..., i_d) : x ∈ [n]}. We denote by Lines(n, d) the multi-set that contains, for each point ī ∈ [n]^d and each direction j ∈ [d], the line ℓ_{j,ī}. Lastly, given a tensor w ∈ {0,1}^{n^d}, we denote by w|_{ℓ_{j,ī}} ∈ {0,1}^n the restriction of w to the line ℓ_{j,ī}, i.e., the j-th axis-parallel line that passes through ī.
Next, we define axis-parallel planes.
Definition 2.6 (Axis-Parallel (2-dimensional) Planes). For j_1 < j_2 ∈ [d] and ī = (i_1, ..., i_d) ∈ [n]^d, we denote by p_{j_1,j_2,ī} the (j_1, j_2)-th axis-parallel plane passing through the point ī. That is, p_{j_1,j_2,ī} = {(i_1, ..., i_{j_1−1}, x, i_{j_1+1}, ..., i_{j_2−1}, y, i_{j_2+1}, ..., i_d) : x, y ∈ [n]}. We denote by Planes(n, d) the set of all (distinct) axis-parallel planes in all directions in [n]^d.¹² Lastly, for a tensor w ∈ {0,1}^{n^d} and a plane p ∈ Planes(n, d), we denote by w|_p ∈ {0,1}^{n²} the restriction of w to the coordinates in the plane p.
Throughout this work we deal with axis-parallel lines (respectively, axis-parallel planes); hence, for brevity, we will sometimes refer to an axis-parallel line (respectively, axis-parallel plane) simply as a line (respectively, plane). We remark that the multi-set Lines(n, d) contains d · n^d lines and the set Planes(n, d) contains (d choose 2) · n^{d−2} planes. We omit the parameters n and d when they are clear from the context.
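The counts above can be sanity-checked by brute force for small n and d; the sketch below enumerates lines and planes as point sets.

```python
from itertools import product
from math import comb

def line(j, point, n):
    """The j-th axis-parallel line through `point` (as a set of points)."""
    return frozenset(point[:j] + (x,) + point[j + 1:] for x in range(n))

def plane(j1, j2, point, n):
    """The (j1, j2)-th axis-parallel plane through `point`."""
    pts = set()
    for x in range(n):
        for y in range(n):
            q = list(point)
            q[j1], q[j2] = x, y
            pts.add(tuple(q))
    return frozenset(pts)

n, d = 3, 3
lines = [line(j, p, n) for j in range(d) for p in product(range(n), repeat=d)]
planes = {plane(j1, j2, p, n) for j1 in range(d) for j2 in range(j1 + 1, d)
          for p in product(range(n), repeat=d)}

assert len(lines) == d * n**d                  # multi-set: one entry per (point, direction)
assert len(set(lines)) == d * n**(d - 1)       # distinct lines
assert len(planes) == comb(d, 2) * n**(d - 2)  # distinct planes
```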
Testing Tensor Codes. The next theorem, which is implicit in [Vid12], shows that for every d ≥ 3 and every linear code C, testing the tensor code C^{⊗d} can be reduced to testing whether the restriction of a given tensor to a random plane is a codeword of C^{⊗2}.
Theorem 2.7. Let C be a binary linear code and let d ≥ 3 be an integer. Then, there exists a constant c_robust ∈ (0, 1) such that for every tensor w ∈ {0,1}^{n^d} it holds that E_{p∈Planes}[δ_{C^{⊗2}}(w|_p)] ≥ c_robust · δ_{C^{⊗d}}(w). Specifically, in [Vid12, Theorem A.5] it is shown that for d ≥ 3, if a codeword w of a tensor code C^{⊗d} is corrupted, then the corruption in a random (d−1)-dimensional subplane of w is proportional to the corruption in the entire tensor w. By applying this result recursively (a constant number of times), we obtain Theorem 2.7. For completeness, we provide the proof of Theorem 2.7 in Appendix C.
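Theorem 2.7 underlies a simple testing strategy: pick a uniformly random axis-parallel plane and check that the restriction is a codeword of C_0^{⊗2}. The sketch below illustrates only the control flow, using the toy even-parity code as the base code and making no attempt to compute c_robust.

```python
import random
from itertools import product, combinations

n, d = 4, 3  # toy parameters; C0 = even-parity code of length 4

def in_C0(word):
    return sum(word) % 2 == 0

def restrict_to_plane(w, j1, j2, point):
    """n x n matrix: the tensor w restricted to the (j1, j2) plane through `point`."""
    m = []
    for x in range(n):
        row = []
        for y in range(n):
            q = list(point)
            q[j1], q[j2] = x, y
            row.append(w[tuple(q)])
        m.append(row)
    return m

def plane_test(w):
    """One proximity-oblivious check: accept iff a random plane is in C0^{tensor 2}."""
    j1, j2 = random.choice(list(combinations(range(d), 2)))
    point = tuple(random.randrange(n) for _ in range(d))
    m = restrict_to_plane(w, j1, j2, point)
    return all(in_C0(r) for r in m) and all(in_C0(c) for c in zip(*m))

zero = {p: 0 for p in product(range(n), repeat=d)}  # the all-zero codeword
assert plane_test(zero)                             # valid codewords always pass

bad = dict(zero)
bad[(0, 0, 0)] = 1                                  # a single corruption
m = restrict_to_plane(bad, 0, 1, (0, 0, 0))         # any plane through it fails
assert not (all(in_C0(r) for r in m) and all(in_C0(c) for c in zip(*m)))
```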

PCPs of Proximity
Strong canonical PCPs of proximity were defined as follows in [GS06, Section 5.3].
Definition 2.8 (scPCPPs). Let V be a probabilistic algorithm (verifier) that is given oracle access to an input x ∈ {0,1}^n and oracle access to a proof π ∈ {0,1}^{ℓ(n)}, where ℓ : N → N satisfies ℓ(n) ≤ exp(poly(n)). We say that V is a strong (canonical) PCPP verifier for a language L if it makes O(1) queries and satisfies the following two conditions:

• Canonical Completeness: For every x ∈ L, there exists a unique canonical proof for x, denoted π_canonical(x), such that the verifier always accepts the pair (x, π_canonical(x)); i.e., Pr_V[V^{x,π_canonical(x)} = 1] = 1.

• Strong Canonical Soundness: For every x ∈ {0,1}^n and every π ∈ {0,1}^{ℓ(n)}, it holds that Pr_V[V^{x,π} = 0] ≥ poly(δ_PCPP(x, π)), where δ_PCPP(x, π) denotes the relative distance of the pair (x, π) from the set of pairs {(x′, π_canonical(x′)) : x′ ∈ L}.
We say that a scPCPP verifier makes nearly-uniform queries if it queries each bit of the input x with probability Θ(1/|x|) and each bit of the proof π with probability Θ(1/|π|).
We stress that these scPCPPs have one-sided error (i.e., they always accept inputs in L coupled with their canonical proofs). Note that the canonical aspect is reflected in the dependence of δ_PCPP(x′, π′) on Δ(π_canonical(x′), π′), whereas the strong-soundness aspect is reflected in the tight relation between the rejection probability and δ_PCPP(x′, π′).

The Main Construction
In this section we describe our construction of a family of binary linear codes that are both (constant-query) relaxed-LDCs and strong-LTCs with constant relative distance and nearly-linear length. Our codes rely heavily on special-purpose strong canonical PCPs of proximity (with polynomial proof length), which we construct in Section 6, and so we start by stating these scPCPPs. Our first family of scPCPPs is for good linear codes.

Theorem 3.1 (scPCPPs for good codes). Let C : {0,1}^k → {0,1}^n be a linear code with constant relative distance and linear length. Then, there exists a scPCPP for membership in C. Furthermore, the proof length of the scPCPP is poly(n), the scPCPP verifier makes nearly-uniform queries, and the scPCPP proofs are linear (over GF(2)).
As a corollary of Theorem 3.1, we obtain a family of scPCPPs for half-spaces of any good linear code.That is, scPCPPs that ascertain membership in the set of all codewords wherein one given location is set to a specific value (for example, all codewords that have 1 in their first location).
Theorem 3.2 (scPCPPs for half-spaces of good codes). Let C : {0,1}^k → {0,1}^n be a linear code with constant relative distance and linear length. Let i ∈ [k] be a location in a message and b ∈ {0,1} a bit. Then, there exists a scPCPP for C_{i,b}, where C_{i,b} is the set of all codewords w of C such that the i-th bit of w equals b (i.e., w_i = b). Furthermore, the proof length of the scPCPP is poly(n), the scPCPP verifier makes nearly-uniform queries, and the scPCPP proofs are linear (over GF(2)).
See Section 6 for the full proofs of Theorems 3.1 and 3.2. Equipped with the foregoing scPCPPs, we describe the construction of our code, which consists of three parts.
Tensor code part. Let C_0 : {0,1}^k → {0,1}^n be a systematic linear code with linear length (i.e., n = Θ(k)) and constant relative distance 0 < δ(C_0) < 1. Let d ≥ 3 be a sufficiently large constant (to be determined later), and let C = C_0^{⊗d} : {0,1}^{k^d} → {0,1}^{n^d} be the d-fold tensor product of C_0. We augment the code C with scPCPPs that ascertain the validity of each plane in C (using Theorem 3.1) and scPCPPs that ascertain the consistency of each bit in C with each line that passes through it (using Theorem 3.2). Details follow.
Plane scPCPPs part. Let C(x) be a codeword of the tensor code C. For every plane p in the tensor C(x), we use our scPCPPs for good codes to prove that the restriction of C(x) to the plane p (denoted by C(x)|_p) is a codeword of C_0^{⊗2}. Specifically, for a codeword w of C_0^{⊗2}, we denote by π_plane(w) the corresponding canonical proof for the scPCPP verifier of Theorem 3.1. Then, for every message x ∈ {0,1}^{k^d} we define π_planes(x) as the sequence of the canonical proofs for all planes in C(x); that is, π_planes(x) = (π_plane(C(x)|_p))_{p∈Planes}, where Planes is the set of all (2-dimensional) axis-parallel planes in [n]^d (see Definition 2.6).
We append π_planes(x) to the codeword C(x). Note that the length of each plane proof is poly(n²) = n^{O(1)}; we stress that the constant in the O(1) notation does not depend on d. These scPCPPs will be used for the local testability of our code (see Section 5).
Point-line scPCPPs part. Let C(x) be a codeword of the tensor code C. For every point ī = (i_1, ..., i_d) ∈ [n]^d and every direction j ∈ [d], we use our scPCPPs for half-spaces of good codes to prove that the restriction of C(x) to the line that passes through the point ī in direction j (denoted by C(x)|_{ℓ_{j,ī}}) is a codeword of C_0 that is consistent with the value of C(x) at the point ī.¹³ Specifically, for a codeword w of C_0 and an index s ∈ [n], we denote by π_line(w, s) the canonical proof for the scPCPP verifier of Theorem 3.2 (which corresponds to codewords of C_0 whose s-th bit equals w_s). Then, for every message x ∈ {0,1}^{k^d} we define π_lines(x) as the sequence of the canonical proofs for all lines passing through each point in C(x); that is, π_lines(x) = (π_line(C(x)|_{ℓ_{j,ī}}, i_j))_{ℓ_{j,ī}∈Lines}, where Lines is as in Definition 2.5 (i.e., the multi-set that contains all axis-parallel lines that pass through each point ī ∈ [n]^d).
We append π_lines(x) to the codeword C(x). Note that the length of each point-line proof is poly(n) = n^{O(1)}, where the constant in the O(1) notation does not depend on d. These scPCPPs will be used for the relaxed local decodability of our code (see Section 4).
Putting it all together. Our construction is obtained by combining the tensor codeword C(x) with π_lines(x) and π_planes(x), while ensuring that the three parts are of equal length. That is, for every message x ∈ {0,1}^{k^d} we let C′(x) = (C(x)^{t_1}, π_lines(x)^{t_2}, π_planes(x)^{t_3}), where t_1, t_2 and t_3 are the minimal integers such that the three (repeated) parts are of equal length.

Length and relative distance of C′. For sufficiently large d, the length of C′ is nearly-linear in its dimension. To see this, observe that for every x ∈ {0,1}^{k^d} it holds that |C(x)| = n^d, while |π_lines(x)| and |π_planes(x)| are bounded by n^d · poly(n) (since each individual proof has length at most poly(n²), and there are d · n^d point-line pairs and (d choose 2) · n^{d−2} planes). Hence, for every constant α > 0, there exists some constant d > 0 so that the length of C′ is at most (k^d)^{1+α}. The code C′ has constant relative distance, since the relative distance of C (denoted δ(C)) is constant, and since the repetitions of C constitute a third of the length of C′; that is, δ(C′) ≥ δ(C)/3. In the next sections we prove the following theorem.
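The "nearly-linear length" accounting can be sanity-checked numerically. The sketch below assumes, purely for illustration, that every scPCPP proof has length at most n^c for a hypothetical constant c (the construction only guarantees poly(n)); with dimension K = k^d and n = Θ(k), the total length is then Θ(K^{(d+c)/d}) up to constant factors, so taking d ≥ c/α gives exponent at most 1 + α.

```python
import math

def length_exponent(d, c):
    """Exponent alpha' with total length = Theta(K^{alpha'}), where K = k^d,
    n = Theta(k), and each proof has length ~ n^c (hypothetical constant c)."""
    return (d + c) / d

c = 5  # hypothetical scPCPP proof-length exponent, for illustration only
for alpha in (0.5, 0.1, 0.01):
    d = math.ceil(c / alpha)           # smallest d with (d + c) / d <= 1 + alpha
    assert length_exponent(d, c) <= 1 + alpha

print(length_exponent(10, 5))          # d = 10, c = 5 gives exponent 1.5
```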
Theorem 1.1 (restated). For every constant α > 0, there exists some constant d ≥ 0 so that the code C′ : {0,1}^k → {0,1}^n, as defined above, is a binary linear code that is a relaxed-LDC and a strong-LTC with constant relative distance.
Specifically, in Section 4 we prove the relaxed-LDC feature of C′, and in Section 5 we prove the strong-LTC feature of C′.
(Alleged) Codeword Notations. Consider an arbitrary string w ∈ {0,1}^{n′} (which we think of as an alleged codeword). We view w as a string composed of three parts (analogous to the three parts of the construction above):

1. c̄ = (c_1, ..., c_{t_1}): the t_1 alleged repetitions of the tensor code part.

2. p̄lines = (p̄lines_1, ..., p̄lines_{t_2}): the t_2 alleged repetitions of the scPCPP proofs for all the point-line pairs (i.e., lines passing through all coordinates in all directions). For every i ∈ [t_2], the string p̄lines_i consists of a scPCPP proof for every point-line pair; i.e., p̄lines_i = (p^i_ℓ)_{ℓ∈Lines}.

3. p̄planes = (p̄planes_1, ..., p̄planes_{t_3}): the t_3 alleged repetitions of the scPCPP proofs for all the (axis-parallel) planes; i.e., p̄planes_i = (p^i_p)_{p∈Planes} for every i ∈ [t_3].

Establishing the Relaxed-LDC Property
In this section we prove that the code C′, which was defined in Section 3, is a relaxed locally decodable code.
Theorem 4.1. The code C′ (defined in Section 3) is a relaxed-LDC.

In order to prove Theorem 4.1, it will be convenient to use an alternative definition of relaxed-LDCs, which implies the standard definition (Definition 2.2) by applying known transformations. Specifically, in Appendix D (following [BGH+06, Section 4.2]) we show that it suffices to relax the soundness parameter in Definition 2.2 to Ω(1) (instead of 2/3), and to replace the success-rate condition with the following average smoothness condition. Loosely speaking, average smoothness requires that the decoder make nearly-uniform queries on average (over all indices to be decoded). By the foregoing, to prove Theorem 4.1 it suffices to show that the code C′ satisfies the following definition.
Definition 4.2 (relaxed-LDC, modified). A code C : {0,1}^k → {0,1}^n satisfies this definition if there exist a constant δ_radius ∈ (0, δ(C)/2) and a q-query decoder D that satisfies the completeness and relaxed soundness conditions of Definition 2.2 (with soundness parameter Ω(1) instead of 2/3), as well as:

3. Average Smoothness: For every w ∈ {0,1}^n and v ∈ [n], it holds that Pr_{i,j,r}[D^w(i, j, r) = v] < 2/n, where D^w(i, j, r) denotes the j-th query of the decoder D^w on coordinate i and coin tosses r, and the probability is taken uniformly over all possible choices of i ∈ [k], j ∈ [q], and coin tosses r.
We remark that in [BGH+06, Section 4.2], the definition of average smoothness also requires a matching lower bound, i.e., the decoder should satisfy 1/(2n) < Pr_{i,j,r}[D^w(i, j, r) = v] < 2/n. However, for our application it suffices to require only the upper bound. We note that the lower bound can be easily obtained by adding (random) dummy queries.
We start by showing a decoder that satisfies the first two aforementioned conditions (i.e., the completeness condition and the modified relaxed soundness). Next, in Section 4.4, we show how to obtain a related decoder that also satisfies the average smoothness condition.
The Setting. Consider an arbitrary input w ∈ {0,1}^{n′} such that 0 ≤ δ_{C′}(w) < δ_radius. We view w as a string composed of three parts as in Section 3, i.e., w = (c̄, p̄lines, p̄planes). We stress that any part of w might suffer from corruptions, and so we must decode correctly as long as not too many corruptions have occurred (i.e., less than a δ_radius fraction). Denote by x the unique string such that w is δ_{C′}(w)-close to C′(x) (see Footnote 9).

Overview
Recall that a valid codeword of C′ consists of three (repeated) parts: (1) a systematic tensor code C, (2) point-line scPCPPs, and (3) plane scPCPPs. Our general approach is to decode according to the prefix of the first part (which allegedly contains the message x explicitly, since we use a systematic code), and to use the second part to ensure that each bit of the message x is consistent with the rest of the (tensor) codeword C(x). (The third part is not used here; it is only used for the testability of the code.) Thus, the task of (relaxed) decoding the i-th bit of the message is reduced to verifying that the explicitly given value of the i-th bit of the message is consistent with the rest of the codeword.
Towards this end, recall that the second part of each codeword contains scPCPPs that ascertain the consistency of each bit in the tensor with each line that passes through it, but not consistency with the entire tensor. Therefore, in order to verify the consistency of each message bit with the entire codeword, our decoder uses a feature of tensor codes, which we call local propagation. This feature allows us to verify the consistency of a single message bit with the entire codeword by verifying the consistency of a carefully chosen sequence of d point-line pairs (using the point-line scPCPPs). Details follow.
Loosely speaking, the local propagation feature of tensor codes implies that if one corrupts a single point in a codeword and attempts to keep most local views (say, lines in the tensor) consistent with this corruption, then a chain of highly structured modifications must be made, causing the "corruption" to propagate throughout the entire tensor. This is best exemplified by our decoder, which is tailored to take advantage of this phenomenon.
Our decoder is given a coordinate ī = (i_1, ..., i_d) ∈ [k]^d and oracle access to an alleged codeword w as above. The decoder looks for "inconsistencies" in w, and if it finds any, it outputs ⊥. Otherwise, it simply outputs w_ī (which should contain the ī-th bit of the message). Since our base code C_0 has constant relative distance, in order to "corrupt" the point ī in the tensor code without causing the lines that pass through ī to be inconsistent with the corrupted value at ī, one has to corrupt a constant fraction of each line on which ī resides. Thus, our decoder uses the scPCPPs to verify that a line that passes through ī is consistent with the value at ī, assuring that otherwise a constant fraction of each line on which ī resides is corrupted.
Similarly, in order to "corrupt" a constant fraction of a line ℓ in the tensor codeword without causing inconsistencies between the corrupted points on ℓ and the lines that pass through these corrupted points, one has to change a constant fraction of each line that passes through a corrupted point of ℓ (therefore corrupting a constant fraction of each plane in which the line ℓ resides). Thus, our decoder uses the scPCPPs to verify that the line that passes through a random point ī′ of ℓ (which is a corrupted point with probability Ω(1)) is consistent with the value at ī′, assuring that otherwise a constant fraction of many planes in which ℓ resides is corrupted.
Thus, if the ī-th point of the tensor codeword (i.e., the bit we wish to decode) is corrupted, then by iteratively continuing this procedure d times, and performing only d point-line consistency tests, the decoder can detect the corruption at ī with high probability, unless a large fraction of the codeword is corrupted (i.e., unless the corruption at the single point ī propagated to the entire tensor).
We remark that in the proof that C′ is a relaxed-LDC we do not use the strongness and canonicity properties of the scPCPPs (they are only used to prove that C′ is a strong-LTC). Furthermore, since in the following we only wish to present a decoder that satisfies Conditions 1 and 2 of Definition 4.2, we can allow the decoder to output a "don't know" symbol whenever the codeword is corrupted.¹⁵ Thus, we are not concerned with corruptions in the scPCPP parts, since a corruption in these parts can only increase the rejection probability for strings that are not codewords. Inputs that are legal codewords contain no corruptions and hence no "inconsistencies"; thus, on legal codewords our decoder always outputs the correct value.

Warm-up: Two-Dimensional Tensors
Before we proceed to prove Theorem 4.1, we sketch a proof for two-dimensional tensor codes, that is, for the construction of Section 3 with d = 2. In this warm-up, towards simplifying the presentation, we make the following assumptions: we omit the third part of the codeword (i.e., the plane scPCPPs), and we omit the repetitions of the first and second parts of the code (i.e., the tensor code and the point-line scPCPPs), assuming instead that the lengths of the first and second parts are equal. We note that both assumptions can be easily removed (see Section 4.3 for details).
Let w = (c, p) be an alleged codeword that consists of two parts of equal length: (1) c, an alleged codeword of the 2-dimensional tensor code C_0^{⊗2} : {0,1}^{k²} → {0,1}^{n²}, and (2) p, a sequence of alleged scPCPPs, one for every pair of a point ī ∈ [n]² and a line of C_0^{⊗2} that passes through ī; each scPCPP ascertains that the line is a codeword of C_0 that is consistent with the value at the point ī.
Given a point ī = (i_1, i_2) ∈ [k]², the decoder first runs the point-line scPCPP that corresponds to ī and the line ℓ_{1,ī} = {(x, i_2)}_{x∈[n]} passing through ī in direction 1 (i.e., parallel to the first axis), and outputs ⊥ if the scPCPP verifier rejects. Otherwise, the decoder picks a random point ī′ on the line ℓ_{1,ī}, runs the scPCPP corresponding to ī′ and the line ℓ_{2,ī′} = {(i′_1, x)}_{x∈[n]} that passes through ī′ in direction 2, and outputs ⊥ if that scPCPP verifier rejects. If neither scPCPP verifier rejects, the decoder outputs c_ī.
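The warm-up decoder's control flow can be sketched as follows, with the scPCPP verifier replaced by a naive exhaustive check over the toy even-parity base code. The sketch therefore reads whole lines and does NOT preserve the O(1) query complexity; it only illustrates the two chained point-line checks.

```python
import random

def naive_point_line_check(line_bits, s, b):
    """Stand-in for the scPCPP verifier V_{s,b}: accept iff the line is a
    codeword of the toy base code C0 (even parity) whose s-th bit equals b.
    A real scPCPP does this with O(1) queries to a proof oracle."""
    return sum(line_bits) % 2 == 0 and line_bits[s] == b

def decode_2d(c, i1, i2, n):
    """Warm-up decoder: two chained point-line checks, then output c[i1][i2]."""
    # Step 1: the direction-1 line through (i1, i2), checked against c[i1][i2].
    row_line = [c[x][i2] for x in range(n)]
    if not naive_point_line_check(row_line, i1, c[i1][i2]):
        return None                        # the "don't know" symbol
    # Step 2: a random point (x, i2) on that line; check its direction-2 line.
    x = random.randrange(n)
    col_line = [c[x][y] for y in range(n)]
    if not naive_point_line_check(col_line, i2, c[x][i2]):
        return None
    return c[i1][i2]

# A valid codeword of the toy 3x3 tensor code (every row/column has even parity).
cw = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]
assert decode_2d(cw, 0, 0, 3) == 1         # valid codewords decode correctly

bad = [row[:] for row in cw]
bad[0][0] ^= 1                             # corrupt the bit we want to decode
assert decode_2d(bad, 0, 0, 3) is None     # the first check already catches it
```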
For the completeness condition, assume that the decoder is given a valid codeword. In this case, the first part is indeed a valid codeword C_0^{⊗2}(x), and the second part consists of the canonical proofs for C_0^{⊗2}(x). Hence, all of the scPCPP verifiers accept, and since C_0^{⊗2}(x)_ī = x_ī, the decoder succeeds in decoding x_ī.
For the (modified) relaxed soundness condition, assume that the decoder is given a corrupted codeword w = (c, p) that is δ-close to a valid codeword C_0^{⊗2}(x), where δ ≤ δ_radius for a sufficiently small (constant) decoding radius δ_radius. Note that if c_ī = x_ī, then the decoder satisfies the soundness condition (since it always outputs either c_ī or ⊥); hence, we assume that c_ī ≠ x_ī. In this case, when the decoder runs the scPCPP verifier for ī and (the restriction of c to) ℓ_{1,ī}, the verifier does not reject (with high probability) only if c|_{ℓ_{1,ī}} is "close" to a codeword of C_0 that agrees with the corrupted value c_ī (and hence disagrees with C(x) at ī). Since C_0 is a code with constant relative distance, this implies that a constant fraction of the line ℓ_{1,ī} must be corrupted (i.e., the restriction of c to the line ℓ_{1,ī} is Ω(1)-far from its corresponding line in C(x)) for the scPCPP verifier to accept. Finally, if the decoder selects a point ī′ that is one of the Ω(n) corrupted points on ℓ_{1,ī}, then by the same argument, a constant fraction of the points on the restriction of c to the line ℓ_{2,ī′} (that passes through ī′) must be corrupted. We deduce that in order for both scPCPP verifiers to accept (and hence defy the soundness condition), c must contain Ω(n²) corrupted points; i.e., c should be β-far from C_0^{⊗2}(x) for some constant β. By fixing δ_radius < β, we rule out this possibility.

The General Case
We proceed with the full proof that C′ has a decoder satisfying the first two conditions in the definition of a relaxed-LDC (i.e., the completeness and (modified) relaxed soundness conditions of Definition 4.2). We generalize the decoder of Section 4.2 to d-dimensional tensors and ensure that it works without the simplifying assumptions made there. The decoder D is formally described in Figure 1.
Let ī ∈ [k]^d. The completeness of the decoder is immediate from the construction: if the input is a codeword, i.e., w = C′(x) and all of the scPCPP proofs are the canonical proofs for C′(x) (i.e., p̄lines and p̄planes), then all executions of the scPCPP verifiers accept (since the scPCPP verifiers have one-sided error). Recalling that, by definition, C(x)_ī = C_0^{⊗d}(x)_ī = x_ī, the decoding procedure D^w(ī) returns x_ī with probability 1, as required.

Figure 1: The decoder D, given oracle access to w = (c̄, p̄lines, p̄planes) and explicit input ī ∈ [k]^d:

1. Select uniformly at random a copy c from c̄ = (c_1, ..., c_{t_1}).

2. Initialize a set of points P_1 to contain the singleton ī; i.e., P_1 = {ī}.
3. For j = 1 until j = d:

(a) Select uniformly at random a point ū = (u_1, ..., u_d) from the set P_j.
(b) Verify that the j-th axis-parallel line passing through ū is a legal codeword of C_0 and that it is consistent with the value c_ū. That is, run the scPCPP verifier V_{s,c_ū}, where s = u_j, with proof oracle p_{ℓ_{j,ū}} and with input consisting of the j-th axis-parallel line passing through ū in c; in other words, run V_{s,c_ū} on input c|_{ℓ_{j,ū}} and proof p_{ℓ_{j,ū}}.
(c) If V rejects, output ⊥ and halt.
(d) If j < d, fix P_{j+1} to be the set of all points in [n]^d that reside on the j-th axis-parallel lines passing through the points in P_j. That is, P_{j+1} = ∪_{z̄∈P_j} ℓ_{j,z̄}, where ℓ_{j,z̄} (defined in Definition 2.5) is the j-th axis-parallel line passing through the point z̄.
4. Query c_ī and return its value.

Next, we prove the (modified) relaxed soundness of the decoder. Let w ∈ {0,1}^{n′} be a corrupted codeword that is δ_radius-close to a codeword C′(x), where δ_radius is a sufficiently small constant, to be determined later. We partition the analysis into three cases (Claims 4.3 and 4.4 and Lemma 4.5), which we analyze in the rest of this section. We begin with the following two simple claims.
The first claim shows that with probability Ω(1), the random copy c ∈ (c_1, ..., c_{t_1}) that is chosen in Step 1 is not "too far" from the codeword C(x).
Claim 4.3. With probability at least 1/4, the random copy c is 4δ_{C′}(w)-close to C(x), where c is chosen uniformly at random from c̄; that is, Pr_{c∈_R c̄}[δ(c, C(x)) ≤ 4δ_{C′}(w)] ≥ 1/4.

Proof. Since |c̄| = |p̄lines| = |p̄planes|, the string c̄ = (c_1, ..., c_{t_1}) is 3δ_{C′}(w)-close to C(x)^{t_1}. This means that the expected relative distance of a random c ∈ {c_1, ..., c_{t_1}} from C(x) is at most 3δ_{C′}(w). Hence, by Markov's inequality, c is 4δ_{C′}(w)-far from C(x) with probability at most 3/4.
Therefore, throughout the rest of the proof we fix a random copy c and assume that it is 4δ_{C′}(w)-close to C(x). This costs us at most a constant factor in the success probability of the decoder. Having fixed c, recall that for ī ∈ [n]^d, the notation c_ī refers to the value of c at the point ī. The next claim shows that if the bit we are trying to decode is not "corrupted" (in the random copy c), then the decoder D never outputs a mistake.
Claim 4.4. If c_ī = x_ī, then D^w(ī) ∈ {x_ī, ⊥} with probability 1.

Proof. By the definition of the decoder (see Figure 1), regardless of the rest of the values in the input, D always outputs either c_ī or ⊥.
The main part of the analysis takes place in the next lemma, where we assume that c_ī ≠ x_ī and that c is close to C(x), and prove that the decoder succeeds with constant probability, as required. Recall that δ_{C′}(w) < δ_radius, where δ_radius is a sufficiently small constant, to be determined later.
Lemma 4.5. Suppose that c is 4δ_{C′}(w)-close to C(x) and that c_ī ≠ x_ī. Then, Pr[D^w(ī) = ⊥] = Ω(1).

Proof. We say that a point ū ∈ [n]^d of the tensor c is corrupted if c_ū ≠ C(x)_ū. Since we assume that c is corrupted at the point ī (which we wish to decode), by the definition of the decoder, the probability that D makes a mistake equals the probability that D reaches Step 4 and outputs c_ī.
Recall that P_j is the set of points that we consider in the j-th iteration of the decoder. The set P_1 is the singleton that contains ī, i.e., P_1 = {ī}, and for every j ∈ {2, ..., d+1} we recursively define P_j as the set of all points that reside on the (j−1)-th axis-parallel lines that pass through points in P_{j−1} (see Step 3d). Note that for every j ∈ [d] the cardinality of P_j equals the number of points in a codeword of C_0^{⊗(j−1)}; that is, |P_j| = n^{j−1}. Hence, the number of points on all lines that pass through points in P_j (i.e., n^j) equals the number of points in a codeword of C_0^{⊗j}. We will show that in order to corrupt c_ī without being detected by the scPCPPs, one has to corrupt a constant fraction of a large portion of the lines that pass through points in P_d, which in turn implies that one has to corrupt a constant fraction of the tensor code C, in contradiction to our assumption that δ_{C′}(w) < δ_radius for a sufficiently small constant δ_radius.
Consider the first iteration of Step 3 (where j = 1). Denote by s = i_1 the index of the bit that we wish to decode on the line c|_{ℓ_{1,ī}}, and denote by b = c_ī the value of c at ī.
We verify that the line that passes through ī in direction 1 is a codeword of C_0 that is consistent with the value of c at ī. This is done by running the verifier V_{s,b} on input c|_{ℓ_{1,ī}} and proof p_{ℓ_{1,ī}}. Recall that the relative distance of C_0 (i.e., δ(C_0)) is a constant. Since ī is corrupted (i.e., b = c_ī ≠ C(x)_ī), if the line c|_{ℓ_{1,ī}} is δ(C_0)/2-close to the line C(x)|_{ℓ_{1,ī}} (which is a codeword of C_0 that is inconsistent with c_ī), then c|_{ℓ_{1,ī}} is δ(C_0)/2-far from every codeword y ∈ C_0 that is consistent with c_ī (i.e., such that y_s = b ≠ C(x)_ī). In this case, the verifier V_{s,b} rejects c|_{ℓ_{1,ī}} with probability at least poly(δ(C_0)/2) = Ω(1) (regardless of the corresponding proof), as required. Hence, in the following we assume that the line c|_{ℓ_{1,ī}} is δ(C_0)/2-far from C(x)|_{ℓ_{1,ī}}, and therefore P_2 contains at least a β_2 = δ(C_0)/2 fraction of corrupted points.
We proceed by induction. Consider the j-th iteration, where 2 ≤ j ≤ d. We show that if the set of points that we consider in the j-th iteration (the set P_j) contains a constant fraction of corrupted points, then either the decoder rejects with constant probability in the j-th iteration, or P_{j+1} contains a constant fraction (which we denote by β_{j+1}) of corrupted points.
Claim 4.6. Let 2 ≤ j ≤ d and let 0 < β_j ≤ 1 be a constant. If P_j contains at least a β_j fraction of corrupted points, then either: 1. the decoder rejects with probability Ω(1) in the j-th iteration; or, 2. P_{j+1} contains at least a β_{j+1} fraction of corrupted points.
Proof of Claim 4.6. Consider the j-th iteration of Step 3. The decoder selects uniformly at random a point ū = (u_1, ..., u_d) ∈ P_j. Denote by s = u_j the index of ū on the line c|_{ℓ_{j,ū}} (which passes through ū in direction j), and denote by b = c_ū the value of c at ū. By the hypothesis, ū is corrupted with probability at least β_j.
Next, the verifier V_{s,b} is executed on input c|_{ℓ_{j,ū}} and proof p_{ℓ_{j,ū}}. Observe that if at most a β_j/2 fraction of the j-th axis-parallel lines that pass through points in P_j (i.e., the lines {c|_{ℓ_{j,z̄}}}_{z̄∈P_j}) are δ(C_0)/2-far (each) from their corresponding lines in C(x), then the decoder outputs ⊥ with probability at least (β_j/2) · poly(δ(C_0)/2) = Ω(1), as required. This is because in this case, with probability at least β_j/2, we hit a corrupted point whose line is δ(C_0)/2-close to its corresponding line in C(x) (yet the value of this line at position u_j differs from C(x)_ū). As in the first iteration, this implies that this line is δ(C_0)/2-far from every codeword y ∈ C_0 such that y_s = b ≠ C(x)_ū, and hence the verifier V_{s,b} rejects c|_{ℓ_{j,ū}} with probability at least poly(δ(C_0)/2) (regardless of the corresponding proof).
Otherwise (i.e., if the above case does not hold), at least a β_j/2 fraction of the lines {c|_{ℓ_{j,z̄}}}_{z̄∈P_j} are δ(C_0)/2-far (each) from their corresponding lines in C(x). Therefore, P_{j+1} contains at least a β_{j+1} = (β_j/2) · (δ(C_0)/2) fraction of corrupted points.
Note that P_{d+1} is the set of all points in [n]^d. By solving the recurrence relation, we get that β_{d+1} ≥ (δ(C_0)/2) · (δ(C_0)/4)^{d−1}.¹⁶ Recall that, according to the hypothesis of the lemma, c is 4δ_radius-close to C(x). Fix the decoding radius δ_radius to a sufficiently small constant such that 4δ_radius < β_{d+1}. Thus, Claim 4.6 implies that in one of the iterations the decoder must reject with probability Ω(1), as required.
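Under the recurrence β_2 = δ(C_0)/2 and β_{j+1} = (β_j/2)·(δ(C_0)/2) for 2 ≤ j ≤ d, the closed form β_{d+1} = (δ(C_0)/2)·(δ(C_0)/4)^{d−1} can be verified numerically:

```python
def beta(d, delta0):
    """Iterate the recurrence: beta_2 = delta0/2, then d-1 multiplicative steps."""
    b = delta0 / 2                        # fraction of corrupted points in P_2
    for _ in range(2, d + 1):             # iterations j = 2, ..., d
        b = (b / 2) * (delta0 / 2)
    return b

delta0, d = 0.1, 4
closed_form = (delta0 / 2) * (delta0 / 4) ** (d - 1)
assert abs(beta(d, delta0) - closed_form) < 1e-12
assert beta(d, delta0) > 0                # so any delta_radius < beta/4 works
```

The bound is a (small) constant for constant d and δ(C_0), which is exactly what fixing δ_radius below β_{d+1}/4 requires.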
Remarks. The codewords of C′ are of the form w = (c̄, p̄lines, p̄planes), where the three parts are of equal length. The fact that the length of each of the three parts is proportional to the others is critical. The length of c̄ must be proportional to the length of w in order for our code to have constant relative distance (recall that there is no guarantee on the distance of the scPCPP proofs). Moreover, the length of each of the parts queried by the decoder, c̄ and p̄lines, should be proportional to the length of w in order to obtain the average smoothness requirement (see Section 4.4).
We remark that we chose our tensor code to be systematic only for the sake of convenience. Instead, we could have added the message itself (repeated to obtain the proper length) as a fourth part of the code C′.¹⁷
Next, we note that in the proof that our code C′ is a relaxed-LDC we only use the point-line scPCPPs and ignore the plane scPCPPs (i.e., the third part of w). Furthermore, we do not use the strongness and canonicity of the point-line scPCPPs. That is, to obtain only a relaxed-LDC with nearly-linear length, it is enough to augment a good systematic tensor code (i.e., a tensor product of a systematic linear code with constant rate and constant relative distance) with a "regular" PCPP. However, the plane scPCPPs, as well as the strongness and canonicity of the PCPPs, will be heavily used in the proof that C′ is also a strong-LTC (see Section 5).

¹⁶ Recall that the fraction of corrupted points in P_2 is at least δ(C_0)/2, and that for 2 ≤ j ≤ d the fraction of corrupted points in P_{j+1} (which we denote by β_{j+1}) is at least (β_j/2) · (δ(C_0)/2).

¹⁷ Actually, this approach (of adding the message itself to the output of the code) was taken in previous constructions of relaxed-LDCs (see [BGH+06, GR13]). By using a systematic tensor code, we circumvent this unnecessary complication.

Obtaining Average Smoothness
In this subsection, we conclude the proof that C′ : {0,1}^{k′} → {0,1}^{n′} is a relaxed-LDC. Recall that in Section 4.3 we showed a decoder D for C′ (described in Figure 1) that satisfies the first two conditions of Definition 4.2, i.e., the completeness and (modified) relaxed soundness conditions. Next, we show that D can be modified such that it also satisfies the third and final condition of Definition 4.2, i.e., the average smoothness condition (which, roughly speaking, requires that the decoder make nearly-uniform queries on average).
Denote by D^w(i, j, r) the j-th query of the decoder D on coordinate i ∈ [k′], coin tosses r, and input oracle w. Recall that D satisfies the average smoothness condition if for every w ∈ {0, 1}^{n′} and v ∈ [n′] it holds that

    Pr_{i,j,r}[D^w(i, j, r) = v] ≤ 2/n′,    (4.1)

where the probability is taken uniformly over all possible choices of i ∈ [k′], j ∈ [q] (where q is the number of queries that D makes), and coin tosses r.

Firstly, we can relax the condition in Equation (4.1) and replace it with the condition

    Pr_{i,j,r}[D^w(i, j, r) = v] = O(1/n′).    (4.2)

To see this, note that if the decoder D (which makes q = O(1) queries) satisfies Equation (4.2), then we can obtain a decoder D′ that makes q′ = O(q) queries and satisfies Equation (4.1) simply by running D and adding O(q) uniformly distributed "dummy" queries (whose answers the decoder ignores).

Secondly, note that by the construction of D (of Figure 1), each of the scPCPP verifiers that are emulated by D makes nearly-uniform queries (see Theorems 3.1 and 3.2) to the statement it refers to and to its corresponding proof. Observe that on a random index ū ∈ [k]^d the decoder D invokes the verifier of the point-line scPCPP on uniformly selected lines in a uniformly selected copy of the tensor code. Since the length of the first and second parts of each codeword of C′ (i.e., the tensor code and the point-line scPCPPs) constitutes a constant fraction of the length of each codeword of C′, the decoder D satisfies Equation (4.2). Finally, by the foregoing discussion, D can be modified to satisfy Equation (4.1).
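The dummy-query padding step can be sketched as follows (Python, our own illustration; `smooth_queries` and the constant `c` are hypothetical names). The arithmetic in the comment shows why padding dilutes the per-location query probability.

```python
import random

def smooth_queries(real_queries, n, c):
    """Sketch of the padding step: real_queries are the q positions (in [n])
    that a decoder D queries; we append dummy uniform positions whose
    answers are ignored.  If D hits any fixed position v with probability
    at most c/n on average over its q queries, then over the q' = c*q
    padded queries every position is hit with average probability at most
    (q*(c/n) + (c*q - q)*(1/n)) / (c*q) = (2c - 1)/(c*n) <= 2/n."""
    q = len(real_queries)
    dummies = [random.randrange(n) for _ in range(c * q - q)]
    return real_queries + dummies
```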

Establishing the Strong-LTC Property
In this section we prove that the code C′, which was defined in Section 3, is a strong locally testable code.
In order to prove Theorem 5.1, we need to present a tester T that is given oracle access to w ∈ {0, 1}^{n′}, makes O(1) queries to w, and satisfies the following: for every w ∈ C′ it holds that Pr[T^w = 1] = 1, and for every w ∉ C′ it holds that Pr[T^w = 0] ≥ poly(δ_{C′}(w)).

Outline of the Tester and its Analysis
Recall that each codeword of C′ consists of three parts: (1) an alleged d-dimensional tensor codeword of C = C_0^{⊗d} : {0, 1}^{k^d} → {0, 1}^{n^d}, (2) alleged scPCPPs for every 2-dimensional plane in C, where each scPCPP ascertains that the given plane is consistent with C, and (3) alleged scPCPPs for every pair of a point ī in C and a line in C that passes through ī, where each scPCPP ascertains that the line is a codeword of C_0 that is consistent with the value at the point ī.
For simplicity of the exposition, we omit the repetitions of the three parts of the code (i.e., the tensor code, the point-line scPCPPs, and the plane scPCPPs) and assume instead that the lengths of the three parts are equal. We note that this assumption can be easily removed by using an additional consistency test. See the full details in Section 5.2.
The key idea is that, by the robustness property of tensor codes, the corruption rate of a codeword is proportional to the corruption rate of a random plane in the codeword. Hence, in order to ensure that the tensor code part of C′ is valid, our tester uses the plane scPCPPs to ascertain that a random plane is close to being valid. We note that the tester does not need the point-line scPCPPs (which are only needed by the decoder); however, since we also need to ensure that the point-line scPCPP part is not corrupted, our tester verifies a random point-line scPCPP as well.
Clearly, this tester always accepts valid codewords. To analyze what happens with non-codewords, consider a string that is somewhat far from C′. In this case, one of the following three cases must hold:
1. The tensor code part is far from a legal codeword of C_0^{⊗d}.
2. The tensor code part is close to a legal codeword of C_0^{⊗d}, but the plane scPCPP proofs part is far from the corresponding canonical proofs.
3. The tensor code part is close to a legal codeword of C_0^{⊗d}, but the point-line scPCPP proofs part is far from the corresponding canonical proofs.
To ensure that in the first case the tester succeeds (i.e., rejects with sufficiently high probability), it is enough to test that a random plane in c is close to a codeword of C_0^{⊗2}. To accomplish this, we choose uniformly at random a (2-dimensional, axis-parallel) plane and run the corresponding plane scPCPP verifier. This suffices since Theorem 2.7 asserts that if a tensor c is far from a legal codeword of C_0^{⊗d}, then a random (2-dimensional, axis-parallel) plane in c must also be far from a legal codeword of C_0^{⊗2}.
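Sampling a random axis-parallel plane is a simple operation; the following is a toy sketch (Python, our own illustration; the function name and the dict representation of the tensor are our choices): pick two distinct axes to vary and fix the remaining d − 2 coordinates uniformly.

```python
import random

def random_axis_parallel_plane(tensor, n, d):
    """Sketch: sample a uniformly random axis-parallel 2-dimensional plane
    of an n^d tensor, stored as a dict from coordinate tuples to values."""
    ax1, ax2 = random.sample(range(d), 2)   # the two axes that vary
    fixed = {a: random.randrange(n) for a in range(d) if a not in (ax1, ax2)}
    plane = [[0] * n for _ in range(n)]
    for u1 in range(n):
        for u2 in range(n):
            coord = tuple(u1 if a == ax1 else u2 if a == ax2 else fixed[a]
                          for a in range(d))
            plane[u1][u2] = tensor[coord]
    return plane
```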
The second and third cases are similar, and so we only sketch how to handle the second case. Assume that the tensor is close to a codeword but the plane scPCPPs are far from the corresponding canonical proofs. From this assumption we can deduce that there are many planes that are close to legal codewords of C_0^{⊗2}, but whose corresponding scPCPPs are far from the canonical proofs. Thus, choosing a random plane and running the corresponding plane scPCPP verifier ensures that the tester rejects with sufficiently high probability. This is due to the strongness and canonicity features of our scPCPPs.
To conclude, the tester consists of three parts: (1) a repetition test, wherein we verify the repetition structure of the tensor part; (2) a plane scPCPP consistency test, wherein we verify that a random plane in the tensor is a legal codeword; this test ensures both that the tensor code part consists of valid codewords and that its plane scPCPPs are the corresponding canonical proofs; and (3) a point-line scPCPP consistency test, which we perform only to verify that the point-line scPCPPs consist of the canonical proofs that correspond to the tensor part of the code.

The Full Proof
We proceed with the full proof of Theorem 5.1, which formalizes the intuition given in the previous section. We show a strong-LTC procedure for C′. The tester T is formally described in Figure 2.
The strong-LTC Procedure for C′
Input: oracle access to a string w = (c, p_lines, p_planes).
For s ∈ [n] and b ∈ {0, 1}, let V_line(s, b) be a scPCPP verifier that refers to an input of the form z ∈ {0, 1}^n and asserts that there exists y ∈ C_0 such that z = y and z_s = b.
Let V_plane be a scPCPP verifier that refers to an input of the form z ∈ {0, 1}^{n^2} and asserts that there exists y ∈ C_0^{⊗2} such that z = y.
Choose a random copy of each of the three replicated parts of w. That is, choose uniformly at random a copy c̄ in c, a copy p_line = {p_{j,ī}}_{ī∈[n]^d, j∈[d]} in p_lines, and a copy p_plane = {p_p}_{p∈Planes} in p_planes.

Accept if none of the following tests reject:
1. The repetition test: We query two random copies from the tensor part of w and check whether they agree on a random location. More accurately, we select uniformly at random r, r′ ∈ [t_1] and reject if and only if c_r and c_{r′} disagree on a random coordinate.
2. The plane scPCPP consistency test: Choose uniformly at random a plane p ∈ Planes. Reject if the verifier V_plane rejects on the plane p (i.e., input c|_p) and the proof p_p.
3. The point-line scPCPP consistency test: Choose uniformly at random a coordinate ū = (u_1, . . ., u_d) ∈ [n]^d and a direction j ∈ [d] in c. Reject if the verifier V_line(u_j, c_ū) rejects on the line passing through ū in direction j and the proof p_{j,ū}. In other words, we reject if V_line(u_j, c_ū) rejects on input c|_{j,ū} and proof p_{j,ū}.

Consider an arbitrary input w ∈ {0, 1}^{n′} such that δ_{C′}(w) > 0. We view w as a string composed of three parts as in Section 3, i.e., w = (c, p_lines, p_planes). The completeness of the tester is immediate: indeed, if the input is a codeword, i.e., w = C′(x), then the first part of w consists of identical copies of a tensor codeword, and hence the codeword repetition test accepts with probability 1. Similarly, the second and third parts consist of the canonical point-line and plane scPCPP proofs for the aforementioned tensor codeword, respectively; hence the (one-sided error) scPCPP verifiers accept with probability 1.
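The three tests above can be skeletonized as follows (a toy Python sketch of our own; the scPCPP verifiers `v_plane` and `v_line`, the set `planes`, and the proof tables are treated as black boxes, and all names are hypothetical):

```python
import random

def strong_ltc_tester(c_copies, plane_proofs, line_proofs,
                      v_plane, v_line, planes, n, d):
    """Skeleton of the tester of Figure 2 (a sketch).  Each copy in
    c_copies maps coordinate tuples in [n]^d to bits."""
    t1 = len(c_copies)
    # 1. The repetition test: two random copies, one random coordinate.
    r, rp = random.randrange(t1), random.randrange(t1)
    coord = tuple(random.randrange(n) for _ in range(d))
    if c_copies[r][coord] != c_copies[rp][coord]:
        return False
    c = c_copies[random.randrange(t1)]  # the copy used by the scPCPP tests
    # 2. The plane scPCPP consistency test.
    p = random.choice(planes)
    if not v_plane(p, c, plane_proofs[p]):
        return False
    # 3. The point-line scPCPP consistency test.
    u = tuple(random.randrange(n) for _ in range(d))
    j = random.randrange(d)
    return v_line(u, j, c, line_proofs[(j, u)])
```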
Next, we prove the soundness of the tester. We partition the analysis into three cases (Claim 5.2 and Lemmas 5.3 and 5.4), which we analyze in the rest of this section.
Let ĉ ∈ {0, 1}^{n^d} be the tensor that is closest on average to the tensors in c, i.e., a string that minimizes ∆(c, ĉ^{t_1}) = Σ_{i=1}^{t_1} ∆(c_i, ĉ). The first (and standard) claim shows that if c is far from consisting of t_1 identical tensors, then the repetition test (of Step 1) rejects with high probability. Let γ be a constant set to δ(C)/(24d) (for the purpose of Lemma 5.4).
Proof. Suppose that δ(c, ĉ^{t_1}) ≥ (γ/5) · δ_{C′}(w). The codeword repetition test rejects with probability Ω(δ(c, ĉ^{t_1})).

The following lemma shows that if c consists of t_1 nearly identical tensors that are far from a codeword of C, then, due to the robustness feature of tensor codes, a random plane in a random copy in c will be far from valid, and hence Step 2 of the tester rejects with high probability.
Proof. Observe that a random copy c̄ of a tensor codeword in c is Ω(δ_{C′}(w))-far from C with high probability; this follows since c is close to ĉ^{t_1} while ĉ itself is far from C. Next, by the robustness feature of tensor codes, we deduce that if the randomly selected tensor codeword c̄ is Ω(δ_{C′}(w))-far from being valid, then a random plane of c̄ is also Ω(δ_{C′}(w))-far from being valid. Specifically, by Theorem 2.7, there exists a constant c_robust ∈ (0, 1) such that for every tensor w̄ ∈ {0, 1}^{n^d}, the expected distance of a random (axis-parallel) plane of w̄ from C_0^{⊗2} is at least c_robust times the distance of w̄ from C. Hence, by an averaging argument,

    with probability Ω(δ_{C′}(w)), a random plane is Ω(δ_{C′}(w))-far from C_0^{⊗2}.    (5.1)

Note that, by Equation (5.1), with probability Ω(δ_{C′}(w)) we select a plane that is Ω(δ_{C′}(w))-far from a codeword of C_0^{⊗2}. Given such a plane, the scPCPP verifier V_plane rejects with probability Ω(δ_{C′}(w)). Thus, the tester T rejects with probability poly(δ_{C′}(w)) over the internal randomness of T.
In the next lemma, we complete the analysis by assuming that c is sufficiently close to a codeword of C^{t_1}, and showing that in this case most of the "corruption" takes place in the scPCPP proof parts, and hence the scPCPP consistency tests reject with high probability.
Therefore, our assumption that c is γ·δ_{C′}(w)-close to being a codeword of C^{t_1} implies that there exists a unique codeword c′ of C that minimizes the distance of (c′)^{t_1} from c. Let w′ be the codeword of C′ that consists of repetitions of the tensor codeword c′ and its canonical scPCPP proofs; that is, w′ = ((c′)^{t_1}, (π_lines(c′))^{t_2}, (π_planes(c′))^{t_3}). Denote by x the message encoded by w′ (i.e., w′ = C′(x)).
It is convenient to introduce notation for the fraction of corruption in each part of C′. Towards this end, denote the fraction of errors in the first part of the code (the copies of the tensor code) by δ_c, and the fraction of errors in the plane scPCPP part by δ_p. We argue that many planes and their corresponding proofs are Ω(δ_{C′}(w))-corrupted; that is, a fraction of at least δ_p/4 of the axis-parallel planes p in c are corrupted, and, in addition, their corresponding (alleged) plane scPCPP proofs in {p_p}_{p∈Planes} are δ_p/2-far from their (correct) canonical proofs in π_planes(x). Denote the set of planes that satisfy the foregoing condition by BAD.
Observe that, for every plane p ∈ BAD, in order for the input c|_p and the proof p_p to form a valid claim (for the input-proof language that V_plane verifies), one must make at least one of the following changes: (1) change a fraction of at least δ_p/2 of the proof p_p so that it matches π_plane(C(x)|_p), or (2) change a fraction of at least δ(C_0^{⊗2})/2 of c|_p (since p_p might be a valid proof for some input C_0^{⊗2}(y) ≠ c|_p). Thus, for every p ∈ BAD, the probability that V_plane rejects input c|_p and proof p_p is at least polynomial in δ_{C′}(w).
Putting it all together, with probability 2/3 we hit a random copy c̄ of the tensor code that is 3δ_c-close to C(x). Furthermore, with probability at least δ_p we hit a random plane that is δ_p-corrupted, and subsequently, with probability δ_p/2 we hit a plane scPCPP proof that is δ_p/2-corrupted. Finally, assuming the foregoing, the scPCPP verifier V_plane rejects with probability poly(δ_{C′}(w)). Therefore, the tester rejects with probability poly(δ_{C′}(w)). This concludes the proof of Lemma 5.4.

Strong Canonical PCPs of Proximity
In this section we construct scPCPPs with polynomial proof length for any good linear code (see Theorem 3.1) and for any half-space of any good linear code (see Theorem 3.2). Our starting point (see Corollary 6.2) is the following result of [GR13], which in turn builds upon [GS06, Section 5.2]: for any good code C : {0, 1}^k → {0, 1}^{ck}, there exists a strong-LTC C′ : {0, 1}^k → {0, 1}^{poly(k)} such that the first half of C′(x) consists of c blocks, each depending only on a k-bit long block of C(x). Using this result, we construct a scPCPP for any good code C, where the construction applies the above result to several auxiliary codes that are derived from C.

scPCPPs for Good Codes
We start by recalling the statement of Theorem 3.1.
The main technical tool upon which we rely (when proving Theorem 3.1) is the linear inner proof systems (hereafter, LIPS) mechanism, constructed by Goldreich and Sudan. Loosely speaking, the LIPS mechanism allows one to transform linear strong locally testable codes over a large alphabet into strong locally testable codes over a smaller alphabet (see [GS06, Section 5.2]). We encapsulate our usage of the LIPS mechanism in the following theorem, which generalizes [GS06, Theorem 5.20] and [GS06, Proposition 5.21]. Throughout this section, denote F = GF(2).

Theorem 6.1. Let Σ = F^b. For infinitely many k, there exist n = poly(k) and a linear code E : Σ → F^n with constant relative distance such that the following holds. Suppose that C : Σ^K → Σ^N is a strong-LTC that is linear over F and has a (non-adaptive) tester that uses r random bits and makes nearly-uniform queries. Then, there exists ℓ = poly(k) such that ℓ is a multiple of n, and a linear strong-LTC C′ : F^{bK} → F^{2^{r+1}·ℓ}. Moreover, the tester of C′ makes nearly-uniform queries.
As a corollary of Theorem 6.1, we obtain that any good linear code can be augmented to a linear strong-LTC with polynomial length, such that the prefix of the new code is closely related to the original code (but is not equal to it). This is done by viewing the good linear code as a trivial strong-LTC over a sufficiently large alphabet.

Corollary 6.2 (our starting point). Let C : {0, 1}^k → {0, 1}^{ck} be a good linear code with constant relative distance, where c ∈ N is a constant. Then, for some M, m = poly(k), there exist a linear strong-LTC C′ : {0, 1}^k → {0, 1}^{2M} and a linear code E : {0, 1}^k → {0, 1}^m, which has constant relative distance, such that the M-bit long prefix of C′(x) consists of copies of (E(C(x)[1]), . . ., E(C(x)[c])), where C(x)[i] is the i-th block of length k in C(x). Furthermore, the (strong) tester of C′ makes nearly-uniform queries.
We remark that Theorem 6.1 and Corollary 6.2 are straightforward generalizations of [GR13, Theorem B.2] and [GR13, Corollary B.3] (respectively), and we defer their proofs to Appendix A.
The Plan. Let C : {0, 1}^k → {0, 1}^{ck} be a good linear code, where c ∈ N is a constant. We construct a strong-LTC C′ such that a constant fraction of each codeword C′(x) contains copies of C(x). This, in turn, implies a scPCPP for C (see Proposition 6.5). Note that by applying Corollary 6.2 to C we obtain a strong-LTC C′ such that a constant fraction of each codeword C′(x) contains copies of (E(C(x)[1]), . . ., E(C(x)[c])), but not of C(x). This does not seem to suffice for obtaining a scPCPP, and so we use a different approach.
We start by using Corollary 6.2 to obtain a family of linear strong-LTCs {C′_i}_{i∈[ck]}, where n = poly(k), with constant relative distance such that the prefix of each codeword C′_i(x) contains a linear number of copies of the i-th bit of C(x) (as well as other structural features that will be useful for us). This is done via the next lemma, which uses techniques from [GR13].

Lemma 6.3 (obtaining auxiliary codes C′_i). Let C : {0, 1}^k → {0, 1}^{ck} be a good linear code, where c ∈ N is a constant. There exist a constant α ∈ (0, 1), a polynomial value n = poly(k), and a linear code Ĉ : {0, 1}^k → {0, 1}^{cn} with constant relative distance, which satisfy the following: for every i ∈ [ck], there exists a function π_i : {0, 1}^k → {0, 1}^{(c+1)n} such that C′_i(x) = ((C(x)_i)^{αn}, Ĉ(x), π_i(x)) is a linear strong-LTC with constant relative distance. Moreover, for every i ∈ [ck], the (strong) tester of C′_i makes nearly-uniform queries.
We stress that the code Ĉ (which is common to all C′_i's) is independent of i and constitutes a constant fraction of the length of each C′_i.
Proof of Lemma 6.3. For every j ∈ [c], we denote by C(x)[j] the j-th block of length k of C(x). For every i ∈ [ck], consider the code C_i : {0, 1}^k → {0, 1}^{(c+1)k} given by C_i(x) = ((C(x)_i)^k, C(x)). Note that C_i is a good linear code.
For every i ∈ [ck], we apply Corollary 6.2 to C_i and obtain a linear strong-LTC C̃_i : {0, 1}^k → {0, 1}^{2(c+1)·n} with constant relative distance, which is (up to a permutation of its bit locations) of the form C̃_i(x) = ((E((C(x)_i)^k))^t, (E(C(x)[1]))^t, . . ., (E(C(x)[c]))^t, π_i(x)), where m, n = poly(k), the function E : {0, 1}^k → {0, 1}^m is a linear code with constant relative distance, t = n/m, and π_i(x) ∈ {0, 1}^{(c+1)n} is some string. Moreover, the (strong) tester of C̃_i makes nearly-uniform queries.
Denote by Ĉ : {0, 1}^k → {0, 1}^{cn} the linear code (with constant relative distance) that is given by Ĉ(x) = ((E(C(x)[1]))^t, . . ., (E(C(x)[c]))^t). Since E is a linear code with constant relative distance, E(0^k) = 0^m and ∆(E(1^k), 0^m) ≥ αm for some constant α ∈ (0, 1). Now, for every i ∈ [ck], consider the code C′_i, which is obtained from C̃_i by simply removing coordinates on which E(0^k) and E(1^k) agree, in each of the t copies in the first part (i.e., in (E((C(x)_i)^k))^t).
Note that C′_i has constant relative distance. Furthermore, since C̃_i is linear and since we only removed coordinates on which the value is always 0, the code C′_i is also a linear code. Finally, by emulating the execution of the tester of C̃_i on an (alleged) codeword of C′_i (which can be done by returning 0 whenever an omitted coordinate is queried), we obtain that C′_i, which is of the form required by the lemma, is a strong-LTC with a (strong) tester that makes nearly-uniform queries.
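The coordinate-removal step and the emulation trick can be sketched as follows (a toy Python illustration of our own; the code E below is our own choice, not the one from [GS06], and applies only to constant messages b^k, which is exactly the case arising in the construction):

```python
# Toy code E over GF(2): E(x1, x2) = (x1, x2, x1 + x2, 0).
def E(x):
    return [x[0], x[1], (x[0] + x[1]) % 2, 0]

def puncture_positions(e0, e1):
    """Keep exactly the coordinates on which E(0^k) and E(1^k) differ."""
    return [i for i in range(len(e0)) if e0[i] != e1[i]]

def emulate_query(w_punctured, keep, i):
    """Answer a query of the original tester on a punctured codeword.
    On the relevant codewords E(b^k), the removed coordinates always hold
    0 (E is linear and E(0^k), E(1^k) both vanish there), so they are
    answered by 0 without querying."""
    return w_punctured[keep.index(i)] if i in keep else 0
```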
In the actual proof of Theorem 3.1, we will construct a code C′ that encodes a message x by concatenating the encodings of x by all of the strong-LTCs in the family, i.e., C′(x) = (C′_1(x), . . ., C′_ck(x)). Thus, we will obtain a strong-LTC that (up to a permutation of the bit locations) contains copies of the entire codeword C(x) in its prefix. We remark that, in general, the concatenation of strong-LTCs is not a strong-LTC. However, the structure of the aforementioned family of codes (specifically, the fact that all codes in the family contain a common sub-code) implies that the concatenation of the codes in this family is a strong-LTC. The next proposition shows a sufficient condition for obtaining strong-LTCs via concatenation of strong-LTCs.

Proposition 6.4 (concatenating multiple encodings of strong-LTCs with a common sub-code). Let C_1, . . ., C_t : {0, 1}^k → {0, 1}^n be linear strong-LTCs with constant relative distance, and suppose that there exists a set I ⊆ [n] of size Ω(n) such that the restrictions C_1(x)|_I = · · · = C_t(x)|_I are all equal to Ĉ(x), for a code Ĉ with constant relative distance. Then, C(x) := (C_1(x), . . ., C_t(x)) is a strong-LTC with constant relative distance. Moreover, if the (strong) testers of C_1, . . ., C_t make nearly-uniform queries, then the (strong) tester of C also makes nearly-uniform queries.

Proposition 6.4 follows by using a tester that (1) emulates the strong-LTC tester of a randomly selected concatenated code C_i (to ascertain that each concatenated codeword is valid), and (2) tests the consistency of the common code Ĉ in two randomly selected concatenated codes (to ensure that all of the concatenated codewords encode the same message). The analysis is quite straightforward and is deferred to Appendix B.
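The two-part concatenation tester can be sketched as follows (a toy Python illustration of our own; the sub-testers are black boxes and all names are hypothetical):

```python
import random

def concat_tester(parts, sub_testers, I):
    """Sketch of a concatenation tester.  `parts` is an alleged
    concatenated codeword (C_1(x), ..., C_t(x)), `sub_testers` are the
    (black-box) strong-LTC testers of the C_i's, and I is the coordinate
    set on which every C_i agrees with the common sub-code."""
    t = len(parts)
    # (1) Emulate the tester of a randomly selected concatenated code.
    i = random.randrange(t)
    if not sub_testers[i](parts[i]):
        return False
    # (2) Test consistency of the common sub-code in two random parts,
    #     to ensure all parts encode the same message.
    i, j = random.randrange(t), random.randrange(t)
    s = random.choice(I)
    return parts[i][s] == parts[j][s]
```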
The last tool we shall need in order to prove Theorem 3.1 is the following proposition, which allows us to transform strong-LTCs into scPCPPs for prefixes of the strong-LTCs' codewords.

Proposition 6.5 (from strong-LTCs to scPCPPs for related codewords). Let C : {0, 1}^k → {0, 1}^n be a linear code, and let C′ : {0, 1}^k → {0, 1}^{n′} be a linear strong-LTC. If there exists I ⊆ [n′] with |I| = Ω(n′) such that the restriction of C′(x) to I consists of |I|/n copies of C(x), then there exists a scPCPP for C (i.e., for the set of codewords {C(x)}_{x∈{0,1}^k}) with proof length O(n′). Moreover, the canonical scPCPP proofs are linear, and if the (strong) tester of C′ makes nearly-uniform queries, then the verifier of the scPCPP for C also makes nearly-uniform queries.
Proof. Let C, C′, and I be as in the hypothesis. Assume, without loss of generality, that I = {1, . . ., |I|}. Denote the (strong) tester of C′ by T. We use T in a black-box manner in order to construct a scPCPP for the set {C(x)}_{x∈{0,1}^k}. Given a codeword C(x), the canonical scPCPP proof for C(x) is π(x), the restriction of C′(x) to the coordinates outside of I. Let V be the scPCPP verifier that gets oracle access to an alleged codeword w ∈ {0, 1}^n and oracle access to a proof oracle p of length n′ − |I|. Let t = |I|/n. The verifier V emulates the execution of T on (w^t, p) as follows: each query that T makes to the first part (which is allegedly (C(x))^t) is answered by a corresponding query to the input oracle w, and each query that T makes to the other coordinates (which are allegedly π(x)) is answered by a corresponding query to the proof oracle. The verifier V accepts if and only if the emulated run of T on (w^t, p) accepts. Note that if T makes nearly-uniform queries, then V also makes nearly-uniform queries.
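The query-redirection in this emulation can be sketched as follows (a toy Python illustration of our own; `make_verifier` is a hypothetical name, and the tester T is a black box):

```python
def make_verifier(T, t, n):
    """Sketch: turn a strong-LTC tester T, whose codewords look like
    (C(x)^t, pi(x)), into a scPCPP verifier for C.  Queries into the first
    t*n positions are redirected to the input oracle w (all t alleged
    copies are answered by w itself); the rest go to the proof oracle p."""
    def verifier(w, p):
        def oracle(pos):
            return w[pos % n] if pos < t * n else p[pos - t * n]
        return T(oracle)
    return verifier
```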
The completeness of V is immediate: if w is a codeword C(x) and p = π(x), then (w^t, p) is a codeword of C′. We conclude the proof by showing the soundness of V. Note that V gets as input a pair of an alleged codeword w and an alleged canonical proof p. Suppose that δ_PCPP(w, p) := min_{x∈C}{max{δ(x, w), δ(π_canonical(x), p)}} > 0.
Using Lemma 6.3 and Propositions 6.4 and 6.5, we proceed with the proof of Theorem 3.1.
Proof of Theorem 3.1. Let c ∈ N be a constant and let C : {0, 1}^k → {0, 1}^{ck} be a linear code with constant relative distance. We show a scPCPP, with polynomial proof length, for the language of all codewords of C.
First, we apply Lemma 6.3 to C and obtain a linear code Ĉ : {0, 1}^k → {0, 1}^{cn} with constant relative distance and a set of codes {C′_i}_{i∈[ck]} such that each C′_i is a linear code with constant relative distance that is given by C′_i(x) = ((C(x)_i)^{αn}, Ĉ(x), π_i(x)), where α ∈ (0, 1), n = poly(k), and π_i : {0, 1}^k → {0, 1}^{(c+1)n}. Moreover, the (strong) tester of each C′_i makes nearly-uniform queries.

Next, we consider the code C′(x) := (C′_1(x), . . ., C′_ck(x)). Observe that, up to a permutation of the indices, C′ has the form C′(x) = ((C(x)_1)^{αn}, . . ., (C(x)_ck)^{αn}, (Ĉ(x))^{ck}, π(x)), where π(x) = (π_1(x), . . ., π_ck(x)). Note that |(Ĉ(x))^{ck}| = ck · cn, which is a constant fraction of |C′(x)|. By Proposition 6.4, the code C′ is a strong-LTC with constant relative distance whose tester makes nearly-uniform queries.
Finally, the theorem follows by applying Proposition 6.5 to the code C′ with I = [αn · ck], where the code C is repeated αn = |I|/(ck) times. (Indeed, we use the fact that |I| is a constant fraction of |C′(x)|.) Note that the scPCPP proof we obtain (namely, ((Ĉ(x))^{ck}, π(x))) is of length poly(k).

scPCPPs for Half-Spaces of Good Codes
We start by recalling the statement of Theorem 3.2.

Theorem 3.2 (restated). Let C : {0, 1}^k → {0, 1}^n be a linear code with constant relative distance and linear length. Let i ∈ [k] be a location in a message and let b ∈ {0, 1} be a bit. Then, there exists a scPCPP for C_{i,b}, where C_{i,b} is the set of all codewords w of C such that the i-th bit of w equals b (i.e., w_i = b). Furthermore, the proof length of the scPCPP is poly(n), the scPCPP verifier makes nearly-uniform queries, and the scPCPP proofs are linear (over GF(2)).
Theorem 3.2 is obtained by using Theorem 3.1 in a black-box manner. Specifically, note that in case b = 0 the code C_{i,0} is linear, and thus we can apply Theorem 3.1 directly. On the other hand, in case b = 1 the code C_{i,1} is not linear, but we can "shift" it (by a fixed codeword of C_{i,1}) and then apply Theorem 3.1.
Proof of Theorem 3.2. In light of the above, we focus on the case in which b = 1. Assume, without loss of generality, that there exists a codeword c^(i) of C such that the i-th bit of c^(i) is 1 (otherwise, the verifier can always reject). Consider a verifier V_{i,1} that gets oracle access to an input string w and a proof π, and proceeds as follows. The verifier V_{i,1} emulates the execution of V_{i,0} (obtained via Theorem 3.1) on input oracle w + c^(i) (where the summation is point-wise over GF(2)) and the proof oracle π (which should be the canonical proof for w + c^(i) ∈ C_{i,0}). Note that the verifier V_{i,0} makes nearly-uniform queries, and so V_{i,1} also makes nearly-uniform queries. We show that V_{i,1} is a scPCPP for C_{i,1}.

The completeness is immediate: recall that if w is a codeword of C_{i,1}, then w = C(x) with w_i = 1. By the linearity of C, w + c^(i) is a codeword of C such that its i-th bit is 0 (i.e., (w + c^(i))_i = 0). Therefore, we actually invoke V_{i,0} on a codeword of C_{i,0}. For the soundness condition, assume that δ_{C_{i,1}}(w) > 0. Observe that δ_{C_{i,0}}(w + c^(i)) = δ_{C_{i,1}}(w), since adding the fixed codeword c^(i) is a distance-preserving bijection between C_{i,1} and C_{i,0}. Therefore, the verifier V_{i,1} will reject the input w + c^(i) (given the corresponding canonical proof) with probability at least poly(δ_{C_{i,1}}(w)), as required.
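The shift reduction above can be sketched in a few lines (a toy Python illustration of our own; `make_v_i1` is a hypothetical name, and `v_i0` is a black-box verifier for the linear code C_{i,0}):

```python
def make_v_i1(v_i0, c_shift):
    """Sketch of the b = 1 case: c_shift is a fixed codeword c^(i) of C
    whose i-th bit is 1.  The verifier for C_{i,1} simply runs the
    verifier for the *linear* code C_{i,0} on the shifted input
    w + c^(i) (bitwise over GF(2)); the shift is a distance-preserving
    bijection between C_{i,1} and C_{i,0}."""
    def v_i1(w, proof):
        shifted = [wb ^ cb for wb, cb in zip(w, c_shift)]
        return v_i0(shifted, proof)
    return v_i1
```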

Application to Property Testing
In this section we give an application of our main result (Theorem 1.1) to the area of property testing. Specifically, we improve on the best known separation result, due to Gur and Rothblum [GR13], between the complexity of decision versus verification in the property testing model. Details follow.
The study of property testing, initiated by Rubinfeld and Sudan [RS96] and by Goldreich, Goldwasser, and Ron [GGR98], considers highly-efficient randomized algorithms that solve approximate decision problems while only inspecting a small fraction of the input. Such algorithms, commonly referred to as testers, are given oracle access to some object, and are required to determine whether the object has some predetermined property or is far (say, in Hamming distance) from every object that has the property.
Remarkably, it turns out that many natural properties can be tested by making relatively few queries to the object. However, there are also many natural properties that no tester can test efficiently. In fact, "almost all" properties require a very large query complexity to be tested. Motivated by this limitation, Gur and Rothblum [GR13] initiated the study of MA proofs of proximity (hereafter MAPs), which can be viewed as the NP proof-system analogue of property testing.
Loosely speaking, an MAP is a probabilistic proof system that augments the property testing framework by allowing the tester full and free access to an (alleged) proof. That is, such a proof-aided tester for a property Π is given oracle access to an input x and free access to a proof string w, and should distinguish between the case that x ∈ Π and the case that x is far from Π, while only making a sublinear number of queries. More precisely, given a proximity parameter ε > 0, we require that for inputs x ∈ Π there exists a proof that the tester accepts with high probability, and that for inputs x that are ε-far from Π no proof will make the tester accept, except with some small probability of error. For formal definitions we refer to [GR13, Section 2].
As observed in [GR13], given an MAP proof whose length is linear in the size of the object (specifically, a proof that fully describes the object), every property can be tested by making only O(1/ε) queries to the object, simply by verifying the proof's consistency with the object. Hence, it is natural to measure the complexity of an MAP by both the length of the proof and the number of queries made in order to decide whether x ∈ Π or x is ε-far from Π. We note that a property tester can be viewed as an MAP that uses a proof of length 0.
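This observation can be sketched concretely (a toy Python illustration of our own; `trivial_map` and `in_property` are hypothetical names): with a proof that allegedly equals the object itself, the tester first checks the proof is in the property and then spot-checks O(1/ε) random positions.

```python
import random

def trivial_map(x_oracle, proof, n, eps, in_property):
    """Sketch: a trivial MAP with a linear-length proof.  If x is eps-far
    from the property and the proof lies in it, then the proof differs
    from x on >= eps*n positions, so each spot-check catches a
    difference with probability >= eps."""
    if not in_property(proof):
        return False
    for _ in range(int(2 / eps) + 1):  # O(1/eps) consistency checks
        i = random.randrange(n)
        if x_oracle(i) != proof[i]:
            return False
    return True
```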
Gur and Rothblum [GR13] showed that the task of separating the power of property testers and MAPs can be reduced to the task of designing a code that is both locally testable and locally decodable. Furthermore, they noticed that for such a separation, relaxed decodability suffices. Unable to construct a code as in Theorem 1.1, Gur and Rothblum used several weaker codes to obtain partial separation results. Specifically, they proved the following theorem.
Theorem 7.1 (Theorems 3.1, 3.2, and 3.3 in [GR13]). In all items, n denotes the length of the main input being tested.
1. For every constant α > 0, there exists a property Π_α that has an MAP that uses a proof of length O(log n) and makes poly(1/ε) queries for every ε > 1/polylog(n), but for which every property tester must make Ω(n^{1−α}) queries.
2. For every constant α > 0, there exists a property Π_α that has an MAP that uses a proof of length O(log n) and makes poly(log n, 1/ε) queries, but for which every property tester must make Ω(n^{1−α}) queries.
3. There exist a universal constant c ∈ (0, 1) and a property Π that has an MAP that uses a proof of length O(log n) and makes poly(1/ε) queries (without limitation on ε), but for which every property tester must make n^c queries.
Furthermore, each of the above MAPs has one-sided error.
Note that each of these separation results has a drawback: the first separation works only for sufficiently large values of the proximity parameter, the second separation has non-constant query complexity for the MAPs, and the third separation does not require property testers to make a nearly-linear number of queries.
Plugging the code C′ from Theorem 1.1 into the framework developed in [GR13, Lemmas 3.4 and 3.5], we achieve the best of all the aforementioned results; that is, a separation for all values of the proximity parameter, with constant query complexity for the MAPs and nearly-linear query complexity for testers. Formally, we obtain the following separation result between MAPs and property testers.
Theorem 1.3 (restated). For every constant α > 0, there exists a property Π_α that has an MAP that uses a proof of length O(log n) and makes poly(1/ε) queries (without limitation on ε), but for which every property tester must make n^{1−α} queries. Furthermore, the MAP has one-sided error.

A Obtaining Strong LTCs from LIPS
In this appendix, we provide tools that allow us to use the linear inner proof systems (hereafter, LIPS), constructed by Goldreich and Sudan [GS06], to obtain families of strong-LTCs with several features that we take advantage of in Section 6. Specifically, we prove Theorem 6.1 and Corollary 6.2. Throughout this section, denote F = GF(2). Recall the statement of Theorem 6.1.

Theorem 6.1 (restated). Let Σ = F^b. For infinitely many k, there exist n = poly(k) and a linear code E : Σ → F^n such that the following holds. Suppose that C : Σ^K → Σ^N is a strong-LTC that is linear over F and has a (non-adaptive) tester that uses r random bits and makes nearly-uniform queries. Then, there exists ℓ = poly(k) such that ℓ is a multiple of n, and a linear strong-LTC C′ : F^{bK} → F^{2^{r+1}·ℓ}. Moreover, the tester of C′ makes nearly-uniform queries.
Proof. We follow the proof of [GS06, Theorem 5.20], while using the code C of the theorem's hypothesis instead of the third ingredient in that proof. In addition, following [GS06, Proposition 5.21], we use composition theorems (i.e., [GS06, Theorem 5.15] and [GS06, Theorem 5.17]) that preserve the nearly-uniform distribution of the queries that the verifiers (or testers) make, thus ascertaining that C′ has a tester that queries each of its locations nearly uniformly. We note that in our setting, the overhead of replacing the "vanilla" composition theorems (which are used in [GS06, Theorem 5.20]) with the composition theorems that preserve nearly-uniform queries is insignificant. Details follow.
In the following description, all references refer to [GS06]. Recall some basics regarding the terminology used in [GS06]. By Definitions 5.8 and 5.9, an (F, (q, b) → (p, a), δ, γ)-LIPS refers to input oracles X_1, . . ., X_q : [n] → F^a and a proof oracle X_{q+1} : [ℓ] → F^a, where the input oracles provide an n-long encoding (over F^a) of a single symbol in the (much) bigger alphabet F^b (this encoding is denoted E : F^b → (F^a)^n). (In addition, δ is the relative distance of the encoding used, and γ is the detection ratio in strong soundness. In the following, both parameters will be small constants.)

The proof of Theorem 5.20 starts with an overview (page 79), and then lists three ingredients (page 80) that will be used: (1) the Hadamard-based (F, (p_H, k_H) → (p_H + 5, 1), 1/2, 1/8)-LIPS (for any choice of p_H and k_H) of Proposition 5.18, (2) the Reed-Muller-based (F, (p_RM, k_RM) → (p_RM + 4, poly(log p_RM k_RM)), 1/2, Ω(1))-LIPS (for any choice of p_RM and k_RM) of Proposition 5.18, and (3) a specific strong-LTC (namely, the strong-LTC in Part 1 of Theorem 2.4). We shall use the very same first two ingredients, but use the code C in place of the third. Assume, without loss of generality, that the randomness complexity r of the (strong) tester of C satisfies that 2^r is a multiple of N. (We remark that all three ingredients have verifiers or testers that make nearly-uniform queries, and that we compose these ingredients via the composition theorems that preserve this distribution of queries.)

Specifically, the second paragraph following the ingredients list asserts that for any desired p′ and k′, there exists an (F, (p′, k′) → (p′ + 13, 1), Ω(1), Ω(1/p′)^2)-LIPS with randomness O(p′ log k′), and input length and proof length that are poly(p′k′). We shall use p′ = O(1) and k′ = b, where the O(1) stands for the query complexity of the codeword tester for C.
Thus, the above simplifies to asserting an (F, (O(1), b) → (O(1), 1), Ω(1), Ω(1))-LIPS with randomness O(log b) and input/proof lengths (i.e., n and ℓ) that are poly(b). Without loss of generality, we may assume that ℓ is a multiple of n.
Next, we wish to compose C with the above LIPS via Theorem 5.15 (instead of via Theorem 5.13, which does not preserve the nearly-uniform distribution of the queries). In Item 1 of Theorem 5.15, we use K, N and r as provided by the hypothesis, and q = O(1). For Item 2, we use b as provided by the hypothesis, q = O(1) as above, p = O(1) and a = 1, and n, ℓ = poly(b) (all fitting the LIPS above). So we have Γ = F, and get a strong-LTC mapping F^{bK} to F^{2^{r+1}·ℓ}, which makes nearly-uniform queries. In particular, for t = 2^r/(Nn) (i.e., tNn = 2^r), as shown at the top of page 56 (see Equation (32)), the first half of the codewords of the resulting code has the form (E(C(x)_1), ..., E(C(x)_N))^t, where x ∈ F^{bK} is viewed as an element of Σ^K. The theorem follows.
Next, recall the statement of Corollary 6.2.

Proof. Let C : F^k → F^{ck} be a good linear code. Viewing C as a mapping from Σ = F^k to Σ^c, note that C is a strong-LTC, which is (trivially) checked by reading all c symbols (and hence, by definition, it makes uniform queries). The claim follows by instantiating Theorem 6.1 using the code C and taking b = k, K = 1, N = c = O(1), and r = 0.

B Concatenating Multiple Encodings of Strong LTCs
In this appendix, we show a sufficient condition for obtaining strong-LTCs via concatenation of strong-LTCs. Recall the statement of Proposition 6.4, which asserts that C′(x) = (C_1(x), ..., C_t(x)) is a strong-LTC with constant relative distance; moreover, if the (strong) testers of C_1, ..., C_t make nearly-uniform queries, then the (strong) tester of C′ also makes nearly-uniform queries.

Proof. Let |I| = α·n for a constant 0 ≤ α ≤ 1. Assume, without loss of generality, that I = {1, ..., α·n}. For every i ∈ [t], we refer to an alleged (n-bit) codeword C_i(x) as the pair of strings (y_i, z_i) ∈ {0,1}^{α·n} × {0,1}^{(1−α)·n}, so that y_i is the common codeword Ĉ(x) and z_i is the rest of the codeword.
We show a (strong) tester that, given oracle access to a binary string w = ((y_1, z_1), ..., (y_t, z_t)), where (y_i, z_i) ∈ {0,1}^n for every i ∈ [t], accepts every codeword of C′ and rejects non-codewords of C′ with probability that is polynomial in their relative distance from C′. The strong-LTC procedure for C′ is described in Figure 3.
The strong-LTC Procedure for C′
Input: a string ((y_1, z_1), ..., (y_t, z_t)) ∈ {0,1}^{n·t}.
1. The inner strong-LTC test: Select i ∈ [t] at random, and run the strong-LTC tester of C_i on (y_i, z_i).
2. The common codeword consistency test: Select i_1, i_2 ∈ [t] and j ∈ [α·n] at random, and reject if the j-th bits of y_{i_1} and y_{i_2} differ.

Figure 3: Strong local tester for C′

Note that Step 1 of the tester T invokes the tester of a uniformly selected inner code C_i, and so, if the testers of C_1, ..., C_t make nearly-uniform queries, then Step 1 of T also makes nearly-uniform queries. As for Step 2 of T (which queries a uniformly selected bit in two uniformly selected y_i's), note that by adding two dummy queries to the second part of each inner code (i.e., querying a uniformly selected bit in two uniformly selected z_i's) we ensure that this test also makes nearly-uniform queries.
The completeness of the tester is straightforward. If ((y_1, z_1), ..., (y_t, z_t)) is equal to C′(x) for some x ∈ {0,1}^k, then: (1) for every i_1, i_2 ∈ [t] it holds that y_{i_1} = y_{i_2}, and (2) for every i ∈ [t] it holds that (y_i, z_i) is equal to C_i(x). Thus, the tester accepts.
Next, we show the soundness of the tester. Let w = ((y_1, z_1), ..., (y_t, z_t)) be δ_{C′}(w)-far from the code C′, let u ∈ {0,1}^{α·n} be a string that minimizes the value of ∆((y_1, ..., y_t), u^t), and let γ = δ(Ĉ)/36. Suppose that (y_1, ..., y_t) is γ·δ_{C′}(w)-far from u^t. In this case, the "common codeword consistency test" rejects with probability Ω(γ·δ_{C′}(w)) = Ω(δ_{C′}(w)). Thus, in the sequel, we assume that (y_1, ..., y_t) is γ·δ_{C′}(w)-close to u^t. Suppose that u is 3γ·δ_{C′}(w)-far from Ĉ. Since (y_1, ..., y_t) is γ·δ_{C′}(w)-close to u^t, at least half of the y_i's must be 2γ·δ_{C′}(w)-close to u, so these y_i's are γ·δ_{C′}(w)-far from Ĉ. Thus, in the invocation of the strong-LTC test of a random C_i, with probability 1/2, the test is invoked on a string (y_i, z_i) such that y_i is γ·δ_{C′}(w)-far from the codewords of Ĉ. Since |I| = |y_i|, the tester will reject with probability Ω(δ_{C′}(w)). Hence, in the sequel, we assume that u is 3γ·δ_{C′}(w)-close to a codeword of Ĉ. Since we also assume that (y_1, ..., y_t) is γ·δ_{C′}(w)-close to u^t, by the triangle inequality the string (y_1, ..., y_t) is 4γ·δ_{C′}(w)-close to a (unique, since 4γ < δ(Ĉ)/2) codeword Ĉ(x)^t. Furthermore, by an averaging argument, at most a δ_{C′}(w)/8 fraction of the y_i's are δ(Ĉ)/2-far from Ĉ(x).

C Robustness of Tensor Codes
In this section we prove Theorem 2.7, which is implicit in [Vid12]. Specifically, in [Vid12, Theorem A.5] it is shown that for d ≥ 3, if a codeword w of a d-dimensional tensor code C^{⊗d} is corrupted, then the corruption in a random hyperplane (i.e., a (d−1)-dimensional subplane) of w is proportional to the corruption in the entire (d-dimensional) tensor w. By applying this theorem recursively, we obtain that for constant values of d ≥ 3, the corruption in a random 2-dimensional plane of a corrupted codeword of C^{⊗d} is proportional to the corruption in the entire codeword. Formally, we show the following.
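To make the hyperplane structure of tensor codes concrete, the following toy sketch (our own illustration, not part of [Vid12]) builds C^{⊗d} for the simple parity code C (append the XOR of the message bits) and exhibits that restricting a codeword to an axis-parallel hyperplane yields a codeword of the lower-dimensional tensor code.

```python
from itertools import product

def encode_parity(line):
    # The base code C : {0,1}^k -> {0,1}^{k+1}, appending the parity bit.
    return line + [sum(line) % 2]

def tensor_encode(msg, k, d):
    # msg: dict mapping coordinates in [k]^d to bits. Encode along each axis
    # in turn; afterwards the coordinates range over [k+1]^d.
    n = k + 1
    cur = dict(msg)
    for axis in range(d):
        new = {}
        ranges = [range(n) if a < axis else range(k) for a in range(d)]
        ranges[axis] = [None]  # placeholder for the coordinate along `axis`
        for coord in product(*ranges):
            line = [cur[tuple(i if a == axis else c for a, c in enumerate(coord))]
                    for i in range(k)]
            for i, bit in enumerate(encode_parity(line)):
                new[tuple(i if a == axis else c for a, c in enumerate(coord))] = bit
        cur = new
    return cur

def hyperplane(tensor, axis, index):
    # Restrict a d-dimensional tensor to the hyperplane {x : x_axis = index},
    # yielding a (d-1)-dimensional tensor.
    return {tuple(c for a, c in enumerate(coord) if a != axis): bit
            for coord, bit in tensor.items() if coord[axis] == index}
```

Since the parity code is systematic, the hyperplane of a C^{⊗3} codeword at axis 0 and index i < k equals the C^{⊗2} encoding of the corresponding message slice, and the parity hyperplane (index k) is the XOR of the systematic ones; all are C^{⊗2} codewords, which is the structural fact the hyperplane tester exploits.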
Theorem 2.7 (restated). Let C be a linear binary code and d ≥ 3 an integer. Then, there exists a constant c_robust ∈ (0, 1) such that for every tensor w ∈ {0,1}^{n^d} it holds that E_{p ∈ Planes}[δ_{C^{⊗2}}(w|_p)] ≥ c_robust · δ_{C^{⊗d}}(w), where Planes denotes the set of all axis-parallel 2-dimensional planes of w.

We start by recalling the definition of robustness. Informally, we say that a tester is robust if, for every word that is far from the code, the tester's view is far in expectation from any consistent view. This notion was defined for LTCs following an analogous definition for PCPs [BGH+06]. We show that Theorem 2.7 follows by iterative applications of Theorem C.3.
Proof of Theorem 2.7. Let C be a linear code and d ≥ 3 a constant integer. Let w ∈ {0,1}^{n^d} be a tensor. For every 3 ≤ t ≤ d, let T_t be the hyperplane tester for C^{⊗t}. Note that for every 3 ≤ t ≤ d, the tester T_t queries a hyperplane that is allegedly a codeword of C^{⊗(t−1)}; hence T_{t−1} can be composed with T_t; that is, we can run T_t on input w, during which T_t generates a local view w|_I to be queried, and then run T_{t−1} on the local view w|_I. (Note that the composed tester T_3 ∘ ... ∘ T_d queries the restriction of the input w to a uniformly selected plane p ∈ Planes.) The robustness of the composed tester is hence ρ^{T_3 ∘ ... ∘ T_d}_{C^{⊗d}} ≥ ∏_{t=3}^{d} ρ^{T_t}_{C^{⊗t}}. By Theorem C.3, for every t ≥ 3 we have ρ^{T_t}_{C^{⊗t}} ≥ δ(C)^t / (2t^2). Thus, for constant d ≥ 3, it holds that c_robust := ρ^{T_3 ∘ ... ∘ T_d}_{C^{⊗d}} is a positive constant that depends only on δ(C) and d.
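The query pattern of the composed tester can be sketched as follows; the function names are ours, and only the plane-selection logic (not the actual testing) is shown.

```python
import random

def random_plane(d, n, rng=random):
    # Each hyperplane tester T_t fixes one of the t currently-free axes at a
    # uniformly random index; composing T_d, ..., T_3 leaves a 2-dimensional
    # plane. By symmetry, the resulting plane is uniform over all axis-parallel
    # 2-dimensional planes of the [n]^d tensor.
    axes = list(range(d))
    fixed = {}
    while len(axes) > 2:
        axis = rng.choice(axes)          # step of T_t with t = len(axes)
        fixed[axis] = rng.randrange(n)   # which parallel hyperplane to keep
        axes.remove(axis)
    return fixed, tuple(sorted(axes))

def restrict_to_plane(w, fixed, free):
    # Restrict a tensor (dict keyed by coordinate tuples) to the chosen plane.
    return {tuple(coord[a] for a in free): bit
            for coord, bit in w.items()
            if all(coord[a] == i for a, i in fixed.items())}
```

A short computation confirms the uniformity claim: each sequence of choices has probability ∏_{t=3}^{d} 1/(tn), and each plane arises from (d−2)! orders of fixing, giving probability (d−2)!·2/(d!·n^{d−2}) = 1/(C(d,2)·n^{d−2}) per plane.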

D Average Smoothness and Error Reduction for Relaxed LDCs
In this appendix, following [BGH+06, Section 4.2], we show that the modified definition of relaxed-LDCs (see Definition 4.2) implies the standard definition of relaxed-LDCs (see Definition 2.2). Towards this end, we need to show the following: (1) the soundness can be increased from Ω(1) (as in Condition 2 of Definition 4.2) to 2/3 (as in Condition 2 of Definition 2.2), and (2) the average smoothness condition (i.e., Condition 3 of Definition 4.2) can be replaced with the success-rate condition (i.e., Condition 3 of Definition 2.2). Both claims were shown in [BGH+06]; we provide their proofs (adapted to our settings) for completeness.
We start by showing how to perform error reduction for a relaxed-LDC with soundness Ω(1). Recall that the decoder is required to successfully decode each valid codeword and, in addition, given a somewhat corrupted codeword, the decoder is required to either decode successfully or abort with probability Ω(1). On the face of it, it may seem that standard error reduction cannot be applied (since we start with a large error probability). However, error reduction can simply be performed by repeating the execution of the decoder, outputting a bit only if all invocations returned this bit, and aborting otherwise. We remark that the above may increase the number of indices on which the decoder aborts (with probability at least 2/3). However, in the modified definition (i.e., Definition 4.2) there is no restriction on the success rate.
Proof. Let C be a modified relaxed-LDC, and denote its decoder by D. There exists a constant p > 0 such that for every string w that is sufficiently close to a codeword of C it holds that Pr_D[D^w(i) ∈ {x_i, ⊥}] ≥ p. Consider a decoder D′ that operates as follows: D′ executes the original decoder D (with fresh randomness) r times, where r is a constant to be determined later. If all of the executions are consistent, i.e., there exists an a ∈ {0, 1, ⊥} such that every execution returned D^w(i) = a, then D′ outputs a; otherwise, D′ outputs ⊥. (We remark that the distribution of queries of D′ is identical to that of D, and thus D′ also satisfies the average smoothness condition.) Note that the new decoder D′ satisfies Condition 1 of Definition 2.2 (the completeness condition). Moreover, D′ satisfies Condition 2 of Definition 2.2: indeed, given w that is sufficiently close to C(x), the probability that D′ errs is at most p′ = (1 − p)^r. Hence, by fixing r = 2/p we get that Pr_{D′}[D′^w(i) ∈ {x_i, ⊥}] ≥ 1 − p′ ≥ 2/3, as needed.
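The repetition argument can be sketched as follows. Here `decode` is a hypothetical stand-in for the original decoder D (returning 0, 1, or None, with None playing the role of the abort symbol ⊥); it is our own interface, not one from the text.

```python
def amplified_decode(decode, w, i, r):
    # Run the given decoder r times with fresh randomness and answer only on
    # unanimity; any disagreement among the executions results in an abort.
    answers = {decode(w, i) for _ in range(r)}
    if len(answers) == 1:
        return answers.pop()  # all r executions returned the same value
    return None               # inconsistent executions: abort (the symbol ⊥)
```

If each execution outputs the wrong bit with probability at most 1 − p, the amplified decoder errs only when all r executions return the same wrong bit, i.e., with probability at most (1 − p)^r, which drops below 1/3 for r = 2/p.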
Finally, we show that the average smoothness condition (i.e., Condition 3 of Definition 4.2) can be replaced by the success-rate condition (i.e., Condition 3 of Definition 2.2, which limits the number of indices upon which the decoder aborts with probability at least 2/3). The key idea is that a decoder that satisfies the completeness and soundness conditions (i.e., Conditions 1 and 2 of Definition 2.2) only aborts if the local view of the codeword that it queries contains a corrupted point. By average smoothness, the decoder queries a corrupted point only with low probability on average. Thus, by an averaging argument, we can deduce that there is only a small number of indices upon which the decoder might abort.

Proposition D.2. Let C : {0,1}^k → {0,1}^n be a linear code, and let D be a constant-query decoder for C that satisfies Conditions 1 and 2 of Definition 2.2 as well as Condition 3 of Definition 4.2 (i.e., average smoothness). Then, C satisfies all three conditions of Definition 2.2.
Proof. Let the code C and the decoder D be as in the hypothesis of the proposition, and denote the (constant) query complexity of D by q. According to Condition 1, for any x ∈ {0,1}^k and every i ∈ [k], it holds that Pr_D[D^{C(x)}(i) = x_i] = 1. Considering any w that is δ-close to C(x) (where δ ≤ δ_radius), the probability that, given a uniformly distributed index i ∈ [k], the decoder D queries a location on which w and C(x) disagree is at most q · (2/n) · δn = 2qδ. This is due to the fact that, for a uniformly distributed i, no position is queried with probability greater than 2/n.
Let p_i^w denote the probability that on input i the decoder D queries a location on which w and C(x) disagree. We have just established that (1/k) · Σ_{i=1}^{k} p_i^w ≤ 2qδ. By an averaging argument, for I_w := {i ∈ [k] : p_i^w ≤ 1/3}, it holds that |I_w| ≥ (1 − 6qδ) · k. Observe that for any i ∈ I_w, it holds that Pr[D^w(i) = x_i] ≥ 1 − 1/3 = 2/3, as required.
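The averaging step is just Markov's inequality, and can be checked numerically; the values below are illustrative choices of ours, not parameters from the text.

```python
def good_fraction(probs, threshold=1/3):
    # Fraction of indices i with p_i <= threshold (the set I_w above).
    return sum(1 for p in probs if p <= threshold) / len(probs)

def markov_bound(avg, threshold=1/3):
    # Markov: at most an (avg / threshold) fraction of the p_i exceed the
    # threshold, so at least 1 - avg/threshold of them lie in I_w
    # (e.g., 1 - 6q*delta when avg <= 2q*delta and threshold = 1/3).
    return max(0.0, 1 - avg / threshold)
```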

E Proof of Claim 5.6
In this section we provide the proof of Claim 5.6. The proof is similar to the proof of Claim 5.5. However, note that Claims 5.5 and 5.6 deal with different objects: while Claim 5.5 deals with the planes of the tensor code and the plane scPCPPs, Claim 5.6 deals with the lines of the tensor and the point-line scPCPPs. In particular, every plane in the tensor code is coupled with a unique plane scPCPP proof, whereas every line in the tensor code is coupled with n different point-line scPCPPs, one for each point on the line. We begin by restating Claim 5.6. Recall that γ = δ(C)/(24d).
Proof. By the claim's hypothesis, c̄ is δ_c̄-close to C(x)^{t_1}, where δ_c̄ ≤ γ·δ_{C′}(w). By an averaging argument, with probability at least 2/3, the random copy c is 3δ_c̄-close to C(x). We say that a point ī ∈ [n]^d in c is corrupted if c_ī ≠ C(x)_ī; hence, there are at most 3δ_c̄·n^d corrupted points in c. Since there are d·n^{d−1} axis-parallel lines in c, on average the number of corrupted points in a random axis-parallel line is at most 3δ_c̄·n^d/(d·n^{d−1}) ≤ 3δ_c̄·n. Thus, by an averaging argument, at most a δ_p̄/4 fraction of the axis-parallel lines in c contain at least (4/δ_p̄)·3δ_c̄·n corrupted points. Recall that every axis-parallel line ℓ has n corresponding point-line scPCPP proofs (one for each point on ℓ). For every line ℓ, we view these n proofs as one concatenated proof for the line ℓ. By an averaging argument, with probability at least δ_{p̄_lines}/2, the random copy p̄ in p̄_lines is δ_p̄-far from its corresponding set of canonical proofs, π_lines(x). Assume from now on that p̄ is δ_p̄-far from π_lines(x). By another averaging argument, at least a δ_p̄/2 fraction of the concatenated line proofs (i.e., proofs which consist of n point-line scPCPP proofs each) are δ_p̄/2-far from their corresponding (concatenated) canonical line proofs.
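The line-counting step above can be sketched as follows (our own illustration): given the set of corrupted points of a copy in [n]^d, count the corrupted points on each of the d·n^{d−1} axis-parallel lines and bound the fraction of "bad" lines by Markov's inequality.

```python
def corrupted_per_line(corrupted, d):
    # Count, for each axis-parallel line, how many corrupted points it holds.
    # A line is identified by its direction and its remaining d-1 coordinates.
    counts = {}
    for point in corrupted:
        for axis in range(d):
            line = (axis, tuple(c for a, c in enumerate(point) if a != axis))
            counts[line] = counts.get(line, 0) + 1
    return counts

def bad_line_fraction(corrupted, n, d, threshold):
    # Markov: with m corrupted points there are m*d point-line incidences over
    # d*n**(d-1) lines, so the average line holds m/n**(d-1) corrupted points,
    # and only a small fraction of lines can reach a much larger threshold.
    counts = corrupted_per_line(corrupted, d)
    return sum(1 for c in counts.values() if c >= threshold) / (d * n ** (d - 1))
```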
The relaxed-LDC Procedure for C′
Input: a coordinate ī ∈ [k]^d and oracle access to a string w = (c̄, p̄_lines, p̄_planes).
For s ∈ [n] and b ∈ {0, 1}, let V_{s,b} be a scPCPP verifier that refers to an input of the form z ∈ {0,1}^n and asserts that there exists y ∈ C_0 such that z = y and z_s = b.
1. Choose a random copy of the tensor code c in c̄ and a random copy of a set of point-line proofs p̄ in p̄_lines. That is, choose uniformly at random r ∈ [t_1] and r′ ∈ [t_2], and set c ← c̄_r and p̄ = {p̄_{j,ī}}_{ī ∈ [n]^d, j ∈ [d]} ← (p̄_lines)_{r′}.

Figure 1: Relaxed local decoder D for C′

Figure 2: Strong local tester for C′