
List Decoding Bounds for Binary Codes with Noiseless Feedback

Meghal Gupta, University of California, Berkeley, CA, USA
Rachel Yun Zhang, Massachusetts Institute of Technology, Cambridge, MA, USA
Abstract

In an error-correcting code, a sender encodes a message x ∈ {0,1}^k such that it is still decodable by a receiver on the other end of a noisy channel. In the setting of error-correcting codes with feedback, after sending each bit, the sender learns what was received at the other end and can tailor future messages accordingly.

While the unique decoding radius of feedback codes has long been known to be 1/3, the list decoding capabilities of feedback codes are not well understood. In this paper, we provide the first nontrivial bounds on the list decoding radius of feedback codes for lists of size ℓ. For ℓ = 2, we fully determine the 2-list decoding radius to be 3/7. For larger values of ℓ, we show an upper bound of 1/2 − 1/(2^{ℓ+2} − 2), and show that the same techniques used for the ℓ = 2 case cannot match this upper bound in general.

Keywords and phrases:
error-correcting codes, feedback, list decoding
Funding:
Meghal Gupta: Supported by an NSF GRFP Fellowship.
Rachel Yun Zhang: Supported by NSF Graduate Research Fellowship 2141064. Supported in part by DARPA under Agreement No. HR00112020023 and by an NSF grant CNS-2154149.
Copyright and License:
© Meghal Gupta and Rachel Yun Zhang; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Mathematics of computing → Coding theory
Related Version:
Full Version: https://arxiv.org/abs/2410.01951
Editors:
Raghu Meka

1 Introduction

Consider the problem of error-correcting codes with feedback [3]. Alice wishes to communicate a message x ∈ {0,1}^k to Bob by transmitting a sequence of n bits over a noisy channel. After sending each bit, she receives feedback through a noiseless channel, telling her what bit Bob actually received. She can use this feedback to adaptively decide her next bit of communication, potentially allowing for improvement over classical error-correcting codes. As in the classical setting, the goal is to communicate in a way such that even if some fraction r of Alice's bits are adversarially corrupted, Bob can still correctly determine x.

For classical error-correcting codes without feedback, the Plotkin bound [26] states that Bob cannot uniquely decode x if more than 1/4 of the communication is adversarially corrupted. In other words, the unique decoding radius for binary error-correcting codes is 1/4. (In this introduction, when we state a resilience threshold/radius r, we mean that for any ε > 0 there exists a protocol resilient to an (r − ε) fraction of errors, with rate Ω_ε(1). For the sake of simplicity, we omit all ε's from this introduction.) The corresponding question for codes with feedback was first posed by Berlekamp in his 1964 thesis, "Block Coding with Noiseless Feedback": what is the maximum fraction r of errors from which a feedback-based error-correcting code can still be uniquely decoded? Remarkably, he proved that in the feedback setting, the unique decoding radius increases to 1/3, surpassing the Plotkin bound.

While Berlekamp’s result tells us the unique decoding radius of feedback codes, little is known about the list-decoding case. In list-decoding, first introduced by Elias [9] and Wozencraft [34], the goal is for Bob to output a small list of messages including Alice’s. The concept of list decoding for non-feedback codes has proven to be quite illuminating for understanding the Hamming space and for various other applications (see e.g. [13] for further discussion).

In the classical setting, list decoding is useful because it can be done at a larger error radius than unique decoding. In fact, the optimal list decoding radius is known for (non-feedback) codes. For decoding into a list of size ℓ, the optimal radius is 1/2 − C(2s, s) · 2^{−2s−1} where s := ⌈ℓ/2⌉, which is asymptotically 1/2 − Θ(1/√ℓ) [5]. This is achieved by positive-rate random codes.
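For concreteness, this classical radius is easy to tabulate exactly; the snippet below is our own illustration (not part of the paper), computing 1/2 − C(2s, s) · 2^{−2s−1} with exact rationals:

```python
from fractions import Fraction
from math import comb, ceil

def classical_list_radius(l: int) -> Fraction:
    """Optimal l-list decoding radius for binary codes without feedback:
    1/2 - C(2s, s) * 2^(-2s-1), where s = ceil(l / 2)."""
    s = ceil(l / 2)
    return Fraction(1, 2) - Fraction(comb(2 * s, s), 2 ** (2 * s + 1))

# Tabulate the radius for small list sizes; note it increases in pairs.
for l in range(1, 7):
    print(l, classical_list_radius(l))
```

For ℓ = 1 this recovers the unique decoding radius 1/4, and the values approach 1/2 at the Θ(1/√ℓ) rate mentioned above.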

In this paper, we investigate the analogous question for feedback codes, first posed in [30]. In a feedback code, what is the maximum fraction r of adversarial errors such that Bob can always learn a list of size ℓ containing Alice's message? We denote such a code as an (ℓ, r)-list decodable feedback code. In our work, we provide the first non-trivial unconditional lower and upper bounds for this question.

1.1 Results And Further Discussion

Our first result gives an upper bound on the error resilience (decoding radius) of an ℓ-list decodable feedback code.

Theorem 1.

For any positive integer ℓ, there is no (ℓ, 1/2 − 1/(2^{ℓ+2} − 2))-list feedback code.

Prior to this work, the only known bound on the list decoding radius of feedback codes, aside from the trivial upper bound of 1/2, was given in [30]. That result showed upper bounds for a specific subclass of feedback codes that satisfy a "generalized translation" property. It turns out this generalized translation property is not very comprehensive: we will in fact demonstrate a code that breaks the upper bound proven in [30]. In contrast, our upper bound is unconditional and applies to all feedback codes.

In the case of ℓ = 1, Theorem 1 provides an upper bound of 1/3 on the unique decoding radius, consistent with the result in [3]. The key question, however, is whether this bound is tight in general. Although we do not have an answer for arbitrary ℓ, we are able to show it is true in the specific case of ℓ = 2. Specifically, we show that for any ε > 0, there exists a (2, 3/7 − ε)-list feedback code, matching the upper bound established by Theorem 1.

Theorem 2.

For any ε > 0, there is a (2, 3/7 − ε)-list feedback code where n = O_ε(k).

The particular feedback code we use to achieve (2, 3/7 − ε)-list decodability is the same code found in [31], which Spencer and Winkler showed achieves unique decoding up to radius 1/3 − ε. The code, which we'll denote 𝖲𝖶, is defined in Section 4. Compared to the (1/3 − ε)-unique decodability analysis in [31], however, our analysis is significantly more involved, requiring us to characterize the likelihood of four possible inputs at once rather than one.

A reasonable conjecture at this point is that the 𝖲𝖶 protocol will always give an (ℓ, 1/2 − 1/(2^{ℓ+2} − 2))-list decodable code. This would show that Theorem 1 is optimal.

Unfortunately, this turns out not to be true. We show that for ℓ = 3, the error resilience of the Spencer-Winkler protocol does not match the upper bound given in Theorem 1. There is an adversarial corruption strategy on 𝖲𝖶 that requires only a 31/67 ≈ 0.4627 fraction of corruptions to confuse Bob between four different inputs, whereas the bound given by Theorem 1 is 7/15 ≈ 0.4667.

Theorem 3.

For any ε > 0, the feedback code 𝖲𝖶 (defined in Section 4) is not (3, 31/67 + ε)-list decodable for any choice of encoding length n = n(k).

Although this result states that the 𝖲𝖶 feedback code is not (3, 7/15)-list decodable, it remains unknown whether a different feedback code achieves this bound or whether the upper bound can be improved.
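As a numerical sanity check (our own, not from the paper), Theorem 1's bound can be tabulated exactly in both of its equivalent forms and compared against the 31/67 constant of Theorem 3:

```python
from fractions import Fraction

def feedback_list_bound(l: int) -> Fraction:
    """Theorem 1's upper bound on the l-list decoding radius of
    feedback codes: 1/2 - 1/(2^(l+2) - 2)."""
    return Fraction(1, 2) - Fraction(1, 2 ** (l + 2) - 2)

for l in range(1, 5):
    # Equivalent closed form (2^l - 1)/(2^(l+1) - 1), used in Section 3.
    assert feedback_list_bound(l) == Fraction(2 ** l - 1, 2 ** (l + 1) - 1)
    print(l, feedback_list_bound(l))

# The corruption strategy of Theorem 3 needs only a 31/67 fraction,
# strictly less than the l = 3 bound of 7/15.
assert Fraction(31, 67) < feedback_list_bound(3) == Fraction(7, 15)
```

The first few values are 1/3, 3/7, and 7/15, recovering the unique decoding radius at ℓ = 1 and the 2-list radius of Theorem 2 at ℓ = 2.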

Even the asymptotic behavior for this question remains unknown. Our upper bound suggests that the feedback list decoding radius could approach 1/2 exponentially fast with respect to the list size ℓ; that is, the radius for ℓ-list decoding would be 1/2 − 2^{−Θ(ℓ)}. We conjecture that this exponential bound is in fact optimal for the feedback case and propose proving or disproving this as an interesting direction for future research.

In the case of standard ECCs (without feedback), as mentioned earlier, the optimal list decoding radius is 1/2 − C(2s, s) · 2^{−2s−1} for s = ⌈ℓ/2⌉ [5]. That is, for any ε > 0, there exist (ℓ, 1/2 − C(2s, s) · 2^{−2s−1} − ε)-list decodable codes. Asymptotically, this bound says that the list decoding radius is approximately 1/2 − Θ(1/√ℓ), which approaches 1/2 much more slowly than the conjectured 1/2 − 2^{−Θ(ℓ)} radius for the feedback case. Even short of proving the exponential bound, demonstrating a separation between the asymptotics for the feedback and non-feedback cases would be an intriguing result.

1.2 Related Works

1.2.1 Error Correcting Codes with Feedback

We already mentioned the seminal work of [3], which introduced the concept of error-correcting codes with feedback. Berlekamp studied adversarial corruption strategies and proved that 1/3 is the optimal error resilience for a feedback ECC in the face of adversarial errors. Feedback ECCs have also been studied for the binary symmetric channel, where every bit is flipped independently with some fixed probability [4, 35].

Particularly relevant is the work of [30], which initiated the study of list decoding feedback codes and gave some preliminary bounds on the list decoding capacities and radii of codes with noiseless feedback. Using generalizations of Berlekamp's methods, he provides upper bounds on list-decoding capacities for strategies that satisfy what he calls a "generalized translation" property. By contrast, our upper bounds make no assumptions about the types of protocols allowed and are fully general.

Many other works on error-correcting codes with feedback are instead framed through the lens of adaptive search with lies. The two setups are equivalent, and our results imply results in the field of adaptive search with lies. We discuss the relationship below.

Adaptive Search with Lies

An equivalent way of looking at error-correcting codes with unlimited feedback is as follows: after every message from Alice, Bob asks a yes-or-no question about Alice's input, and Alice's next bit is a response to that question. The adversary can corrupt Alice's responses, but only up to some fraction α of the time. Bob is "searching" for Alice's input in a range [N] by asking these yes/no questions and receiving an incorrect answer up to an α fraction of the time. This game is known as the Rényi-Ulam game, introduced independently by Rényi [27] and Ulam [33].

Two excellent surveys on the research in this area are presented in [8, 25]. There are a number of works that aspire to improve upon [3]. For example, [22] presents a strategy with optimal rate (asking the least possible number of questions) while still achieving the best error-correction threshold. An alternative construction meeting the same optimal error resilience was given in [31]. We mention that some works focus on a fixed (constant) number of lies rather than resilience to a fraction of errors, and ask for the smallest number of questions necessary. [24] shows the optimal bound for one lie, the original Rényi-Ulam game. Followups find the optimal strategy for two and three lies as well [20, 7].

Limited Feedback/Questions

The works of [23, 1] study the minimal number of rounds, or batches, of questions Bob needs to ask when conducting adaptive search in the Ulam-Rényi model. The work of [25] poses the open question of how many batches of feedback are necessary before the optimal strategy becomes essentially the same as the fully adaptive setting.

In a more code-centric view, this model corresponds to a feedback code with limited feedback, where Alice receives feedback only a few times throughout the protocol instead of after every bit she sends. [21] studied the case where the feedback Alice receives consists of a small fraction of Alice’s rounds. [12] took this direction even further, showing that a constant number of rounds of feedback consisting of only log(n) bits is enough to achieve the same unique decoding radius as for the unlimited feedback case. [29] showed that with O(1) bits of feedback, one can achieve the zero-error capacity of the error channel with full noiseless feedback.

1.2.2 List Decoding

The notion of list decoding was introduced by Elias [9] and Wozencraft [34]. The goal is for Bob to output a small list of messages including Alice’s. Although they introduced list-decoding in the setting of random error models, the main focus since then has been for adversarial error models. See [32, 16, 14] for surveys on the topic.

One main advantage is that list decoding enables correcting up to a factor two more worst-case errors compared to unique decoding. As we stated earlier, the optimal tradeoff between the list size and error tolerance (in the classical non-feedback setting) is known [5]. This work shows that there are positive rate codes achieving this maximum possible error tolerance. An analogous result for q-ary codes was shown in [28].

Another advantage of list-decoding is that it allows for better-rate codes. One can ask what the optimal tradeoff between the error tolerance r and the rate R of the code is when the list size is required to be small or constant, but not necessarily achieving the optimal bounds above. In the large alphabet setting, a simple random coding argument shows that r > 1 − R − o(1) is achievable. The work of [19] provides an explicit positive-rate, efficiently encodable/decodable code for this task using folded Reed-Solomon codes. This research has since been expanded upon, including in [15, 6, 11, 2].

Works such as [18, 17] also discuss the tradeoff between all three parameters: list size, error tolerance, and rate.

2 Definitions

Before stating our results, let us formally define a list decodable feedback ECC.

Definition 4 ((ℓ, r)-List Decodable Feedback Code).

An (ℓ, r)-list feedback code is a family of protocols {π_k}_k from Alice to Bob, defined as follows for each k.

  • Alice begins with an input x ∈ {0,1}^k that she intends to communicate to Bob.

  • In each of n = n(k) = |π_k| rounds, Alice sends a single bit that may be flipped (by an adversary). At the end of the round, Alice learns whether the bit was flipped.

  • At the end of the protocol, Bob must output a list L of size ℓ that must contain x, as long as at most rn bits were flipped throughout the protocol.

The Coin Game Reformulation

Throughout the remainder of the paper, we'll argue our results about list decodable feedback ECCs in the language of a coin game played between two players, Bob and Eve. To this end, we reformulate the task of communicating over a noisy channel with feedback as this coin game. A version of this coin game reformulation was first given by [31], and it is quite similar to the adaptive-search-with-lies formulation of [33] and [27].

Definition 5 ((ℓ, r; K, n)-Coin Game).

In the coin game parametrized by K, n, ℓ, and r ∈ [0,1], two players Bob and Eve play a game on a number line. At the beginning of the game, all K coins are at position 0. In each of n rounds, Bob partitions the coins into two sets, and Eve chooses one of the sets; all coins in the chosen set move up by 1.

Eve wins if, at the end of the n rounds, there are ℓ + 1 coins all with positions at most rn, and Bob wins otherwise.

The connection to feedback codes is as follows. The K = 2^k coins correspond to all possible values of Alice's input. For each of the n rounds of the feedback code, Bob partitions the coins into two sets based on the two possibilities 0 and 1 for Alice's next message. Based on the bit that he receives, he increments the position of all coins in the opposite set; that is, the coins for which Alice would have sent the opposite bit. In this manner, at any point in time, each coin's position is the number of corruptions that must have occurred so far if the given coin were actually Alice's input.

Let's say we want a scheme that is (ℓ, r)-list decodable. Bob wins if at the end of the n rounds there are at most ℓ coins left with positions at most rn (we say all other coins have "fallen off the board"). Eve's goal, on the other hand, is to pick a set that Bob hears each round so that at the end of the game more than ℓ coins are left on the board. If she is able to pick any sequence of sets such that more than ℓ coins are left on the board at the end of the game, then the resulting code is necessarily not (ℓ, r)-list decodable, since any of the coins left on the board could have been Alice's input.
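As an illustration (our own code, with arbitrary random partitions standing in for Bob's strategy), one round of the coin game is simple bookkeeping: Eve names one side of Bob's partition, and every coin on that side moves up by 1. The invariant is the one described above: each coin's position equals the number of rounds in which its side was moved, i.e., the number of corruptions needed if that coin were Alice's input.

```python
import random

def play_round(positions, bob_set, eve_choice):
    """One round of the coin game on coins 0..K-1.

    positions:  list of current coin positions (mutated in place).
    bob_set:    set of coin indices forming one side of Bob's partition
                (the other side is its complement).
    eve_choice: 0 to move the coins in bob_set, 1 to move the rest.
    Returns the set of coins that moved."""
    K = len(positions)
    moved = bob_set if eve_choice == 0 else set(range(K)) - bob_set
    for i in moved:
        positions[i] += 1
    return moved

random.seed(0)
K, n = 8, 20
positions = [0] * K
times_moved = [0] * K  # independent tally for the sanity check below
for _ in range(n):
    bob_set = set(random.sample(range(K), K // 2))  # arbitrary partition
    moved = play_round(positions, bob_set, random.randint(0, 1))
    for i in moved:
        times_moved[i] += 1

# Each coin's position equals the number of rounds its side was moved.
assert positions == times_moved
```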

Notation

Let us define the notation that we will use for a coin game.

  • For a coin x, pos_t(x) denotes the position of coin x after t rounds.

  • c_t(i) denotes the coin with the i'th smallest position after t rounds.

  • For i ∈ [K], posc_t(i) = pos_t(c_t(i)) denotes the position of the coin with the i'th smallest position after t rounds.

3 Upper Bound

In this section, we show an upper bound (impossibility result): no ℓ-list feedback code can achieve an error resilience greater than 1/2 − 1/(2^{ℓ+2} − 2) = (2^ℓ − 1)/(2^{ℓ+1} − 1). Let us state the main theorem of this section.

Theorem 1 (restated). For any positive integer ℓ, there is no (ℓ, 1/2 − 1/(2^{ℓ+2} − 2))-list feedback code.

In the formulation of the coin game, we wish to establish that in an n-round protocol with K coins, the adversary can always maintain ℓ + 1 coins at or below position ((2^ℓ − 1)/(2^{ℓ+1} − 1)) · n, for any K sufficiently large compared to ℓ. From this, we formally conclude Theorem 1 in Section 3.3.

We begin by establishing the statement for ℓ = 1; that is, the adversary can always maintain 2 coins at or below position (1/3)n. This statement was known as early as [3], when the concept of feedback codes was introduced, but we recreate a proof here. Our proof for general ℓ is inductive and therefore requires establishing the ℓ = 1 case first.

3.1 Base Case: 1-List Feedback Codes

We want to show that the adversary can always maintain the second coin at or below position (1/3)n. However, in order to use our statement as an appropriate base case for the inductive hypothesis in Section 3.2, we actually show the following stronger statement about partially completed coin games.

Consider a partially completed coin game where m of the n rounds have already been played. Then, the adversary can limit posc_n(2) in terms of these variables, specifically by the expression max{(1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)), posc_m(2)}. In particular, the case m = 0 corresponds to the situation where the entire coin game remains, and indeed the expression then says that posc_n(2) is at most (1/3)n.

Theorem 6.

For any ε > 0, the following holds for sufficiently large n. Consider an n-round coin game on K = 3 coins in which m rounds have already been completed. Then, the adversary can ensure that at the end of the n rounds,

posc_n(2) ≤ max{(1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)), posc_m(2)}.

Before proving this, let us informally describe the strategy Eve uses to achieve this bound. This strategy is essentially identical to what has appeared in previous literature, such as [10]. For most of the protocol, she simply chooses the smaller (fewer elements) of the two sets given by Bob. That way, she need only move up one coin per round, so on average a coin moves up by 1/3 per round, which is what gives the (1/3)(n − m) term in the stated bound. However, the issue is that the coins need not move evenly. For example, the first coin may not move at all, while the last two coins each move by (1/2)(n − m). Then, the position of the second coin increases too much, but in compensation, we have some leeway with the first coin. To use this, once the first coin is sufficiently low in position that it can never surpass the second coin, Eve switches strategies to always choosing the set that does not contain the second coin. Now, although the average coin can move up more per round, the second coin no longer moves, which is what we wanted, and we do not risk the first coin catching up to the second by doing this.

Proof.

Let θ = max{(1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)), posc_m(2)}. In each round t, Bob chooses the set S_t and its complement S̄_t (both subsets of the 3 coins in the game), and the adversary Eve needs to choose one set whose coins move up by 1 each. Eve uses the following strategy.

For a round t, while posc_{t−1}(2) < θ, Eve always chooses the smaller of S_t and S̄_t, that is, the one with fewer elements. (Note that the inequality in this condition is strict, so, for example, if posc_m(2) ≥ (1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)), it is possible to skip this step entirely.)

Once posc_t(2) reaches θ, Eve instead always selects the set not containing the coin in the second position.

Let us say this transition between the two strategies occurs at round T + 1, so Eve spends rounds m + 1 through T inclusive on the first strategy, and the rest of the protocol on the second strategy. (If this round T never occurs, then at the end of the protocol posc_n(2) < θ, as desired.) Our goal will be to show that in the remaining n − T rounds, the first coin will not pass the second coin, even if it is selected to increase every time. Then there will be two coins left at the end of the protocol, both with position at most θ.

Between rounds m + 1 and T, in every step the total position of the coins increases by at most 1. Also, at the first time the condition is violated, we must have posc_T(2) = θ ≥ (1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)). Therefore, at the end of round T, it holds that

posc_T(1) + posc_T(2) + posc_T(3) ≤ posc_m(1) + posc_m(2) + posc_m(3) + (T − m)
⟹ posc_T(1) + 2θ ≤ posc_m(1) + posc_m(2) + posc_m(3) + (T − m),

where we used posc_T(3) ≥ posc_T(2) = θ. Rearranging and applying θ ≥ (1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3)),

posc_T(2) − posc_T(1) ≥ 3θ − (posc_m(1) + posc_m(2) + posc_m(3)) − (T − m)
 ≥ (n − m + posc_m(1) + posc_m(2) + posc_m(3)) − (posc_m(1) + posc_m(2) + posc_m(3)) − (T − m)
 = n − T.

In other words, in the remaining n − T rounds, the first coin will not be able to switch positions with the second or third coin. As such, posc(2) will never increase, since the set containing the second coin is never selected, and the first coin never surpasses it.

This concludes the proof, as it means that posc_n(2) stays at θ (or never reaches θ at all).
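The two-phase strategy above can be sketched in code. The sketch below is our own illustration, not from the paper: Bob's partitions are random stand-ins for an arbitrary strategy, θ is kept as a float (ignoring the integer-rounding issues the formal proof handles), and we assert only the mechanical properties used in the proof: phase 1 moves at most one coin per round, and phase 2 never moves the current second coin.

```python
import random

def eve_two_phase(n, bob_partitions):
    """Eve's two-phase strategy on K = 3 coins with m = 0.

    bob_partitions: list of n sets, each a subset of {0, 1, 2}.
    Returns the final positions."""
    theta = n / 3  # (1/3)(n - m + sum of positions) with m = 0
    pos = [0, 0, 0]
    for S in bob_partitions:
        comp = {0, 1, 2} - S
        second = sorted(range(3), key=lambda i: pos[i])[1]
        if sorted(pos)[1] < theta:
            # Phase 1: move the smaller set.
            moved = S if len(S) <= len(comp) else comp
            assert len(moved) <= 1  # at most one coin moves per round
        else:
            # Phase 2: move the set avoiding the current second coin.
            moved = S if second in comp else comp
            assert second not in moved  # the second coin stays put
        for i in moved:
            pos[i] += 1
    return pos

random.seed(1)
n = 60
parts = [set(random.sample(range(3), random.randint(0, 3))) for _ in range(n)]
final = eve_two_phase(n, parts)
print(sorted(final))
```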

3.2 Inductive Step: -List Feedback Codes

In this section, we establish the inductive step. We inductively show an analog of Theorem 6 for larger list sizes.

Theorem 7.

For any positive integer ℓ, the following holds for sufficiently large n. Consider an n-round coin game on K = 2^{ℓ+1} − 1 coins in which m rounds have already been completed. For a round m ∈ [n], let

θ := (1/(2^{ℓ+1} − 1)) · ((2^ℓ − 1)(n − m) + posc_m(1) + ⋯ + posc_m(K)).

If posc_m(1) ≥ θ + m − n, then Eve can ensure that at the end of the n rounds, posc_n(ℓ + 1) ≤ θ.

Again, before proving this formally, let us describe Eve's strategy at a high level. Her strategy to achieve this bound will be recursive. At the start of the protocol, she chooses the smaller of Bob's two sets at every step, so that the average coin moves by (2^ℓ − 1)/(2^{ℓ+1} − 1) per round, giving the (2^ℓ − 1)(n − m)/(2^{ℓ+1} − 1) term in the bound. As with the ℓ = 1 case, this doesn't give the desired bound for the final position of the (ℓ + 1)'th coin, because it's possible the first few coins did not move at all while the (ℓ + 1)'th coin moved by more. However, it can be shown that in compensation, the position of the first coin must be very low in this case. Once the first coin is at a low enough position that, even if it is selected in every remaining round, it can no longer exceed θ by the end of the protocol, Eve switches strategies. Now, she needs to ensure that ℓ of the remaining coins stay below position θ (since we have already found one coin that will definitely remain below θ). This is where her strategy becomes recursive: she simulates a new coin game on the leftmost 2^ℓ − 1 coins of the 2^{ℓ+1} − 2 remaining coins and plays on these coins according to her (recursively described) strategy for the case of ℓ − 1.

Proof.

First we check that the statement holds for the base case ℓ = 1. This follows from Theorem 6. In this case, we have that

θ_1 := (1/3)(n − m + posc_m(1) + posc_m(2) + posc_m(3))
 ≥ (1/3)(n − m + (θ_1 + m − n) + 2 · posc_m(2))
 = (1/3)(θ_1 + 2 · posc_m(2)).

In the second step, we used the condition in Theorem 7 that posc_m(1) ≥ θ_1 + m − n, together with posc_m(3) ≥ posc_m(2). Rearranging, this implies that θ_1 ≥ posc_m(2). Theorem 6 states that posc_n(2) ≤ max{θ_1, posc_m(2)} = θ_1, since θ_1 ≥ posc_m(2), which is what we wanted to prove.

For the inductive step, let us assume the statement is true for ℓ − 1 and prove it for ℓ. Throughout the proof, let Σ_m := posc_m(1) + ⋯ + posc_m(K).

In each round t, Bob's two sets are S_t and its complement S̄_t, and the adversary Eve needs to choose one set whose coins move up by 1 each. Eve uses the following strategy.

For a round t, while posc_{t−1}(1) > θ + (t − 1) − n, Eve always chooses the one of S_t and S̄_t with fewer elements. Note that since the threshold θ + t − n increases by exactly 1 per round while positions increase by at most 1, at the first round at which this condition is violated we have posc_t(1) = θ + t − n exactly; by the assumption of the theorem, this holds even if the condition is violated immediately at round t = m + 1. Denote the round at which it is first violated by T, if it is ever violated.

At this point (after round T), or at the start of the protocol if the condition was violated immediately, it is impossible for the coin c_T(1) to ever hit the threshold θ, since its position can increase by at most 1 in each of the remaining n − T rounds. Thus Eve can ignore the first coin, and play a coin game on only the remaining 2^{ℓ+1} − 2 coins, with the goal of keeping ℓ of them below θ. She also opts to exclude the second half of these coins, c_T(2^ℓ + 1), …, c_T(2^{ℓ+1} − 1), from her new coin game, and simply concedes that they may end up above the threshold. In all, the players are playing a new coin game on the 2^ℓ − 1 coins corresponding to c_T(2), …, c_T(2^ℓ), with Eve having the goal of keeping the ℓ'th smallest of them below θ. She executes the strategy for this recursively, keeping the ℓ'th smallest position bounded by what Theorem 7 says for ℓ − 1.

Now, let us analyze this strategy.

First, we address the case where round T never occurs. In this case, at the end of the protocol it holds that posc_n(1) > θ + n − n = θ. On the other hand, since Eve is picking the smaller of S_t and S̄_t each time, at most 2^ℓ − 1 coins move per round, so

posc_n(1) + ⋯ + posc_n(2^{ℓ+1} − 1) ≤ (2^ℓ − 1)(n − m) + Σ_m
⟹ posc_n(1) ≤ (1/(2^{ℓ+1} − 1)) · ((2^ℓ − 1)(n − m) + Σ_m) = θ,

which is a contradiction; the second step holds because the minimum is at most the average, and the final equality is exactly the definition of θ.

Next, we address the case where round T occurs at some time m ≤ T ≤ n. After round T, it holds that posc_T(1) = θ + T − n exactly, and so

posc_T(1) + ⋯ + posc_T(2^{ℓ+1} − 1) ≤ (2^ℓ − 1)(T − m) + Σ_m
⟹ posc_T(2) + ⋯ + posc_T(2^{ℓ+1} − 1) ≤ (2^ℓ − 2)T − (2^ℓ − 1)m − θ + n + Σ_m
⟹ posc_T(2) + ⋯ + posc_T(2^ℓ) ≤ (2^{ℓ−1} − 1)T − ((2^ℓ − 1)/2) · m + (1/2)(n + Σ_m − θ). (1)

Here the second step subtracts posc_T(1) = θ + T − n from both sides, and the third step halves the right-hand side, since the 2^ℓ − 1 smallest of these 2^{ℓ+1} − 2 coins contribute at most half of the total sum.

The coin in position 1 at time T can never fall off the board, because its position can increase by at most 1 in each of the last n − T rounds, and so we can disregard it. We will now apply the inductive hypothesis to a new game played on the 2^ℓ − 1 coins in positions 2 through 2^ℓ at time T. We will renumber the coins, so that we can view this new game as being played on 2^ℓ − 1 coins numbered 1 through 2^ℓ − 1, where the position functions are given by pos′ and posc′. Denote

θ_{ℓ−1} := (1/(2^ℓ − 1)) · ((2^{ℓ−1} − 1)(n − T) + posc′_T(1) + ⋯ + posc′_T(2^ℓ − 1)).

Our goal is to show that θ_{ℓ−1} ≤ θ. This will prove the inductive step for the following reasons:

  • We have that posc′_T(1) = posc_T(2) ≥ posc_T(1) = θ + T − n ≥ θ_{ℓ−1} + T − n (using θ_{ℓ−1} ≤ θ), so the condition for the inductive step of the theorem is met.

  • We know that posc_n(ℓ + 1) ≤ max{posc′_n(ℓ), θ}, because coin c_T(1) cannot surpass θ and, in the new game, ℓ coins end with position at most posc′_n(ℓ). Moreover, by the inductive hypothesis, we know that posc′_n(ℓ) ≤ θ_{ℓ−1} ≤ θ, and so posc_n(ℓ + 1) ≤ θ, as we wanted.

Now, we show the desired statement that θ_{ℓ−1} ≤ θ. Using the bound (1) we found on posc_T(2) + ⋯ + posc_T(2^ℓ) = posc′_T(1) + ⋯ + posc′_T(2^ℓ − 1), we have that

θ_{ℓ−1} = (1/(2^ℓ − 1)) · ((2^{ℓ−1} − 1)(n − T) + posc′_T(1) + ⋯ + posc′_T(2^ℓ − 1))
 ≤ (1/(2^ℓ − 1)) · ((2^{ℓ−1} − 1)(n − T) + (2^{ℓ−1} − 1)T − ((2^ℓ − 1)/2) · m + (1/2)(n + Σ_m − θ))
 = (1/(2^{ℓ+1} − 2)) · ((2^ℓ − 1)(n − m) + Σ_m − θ)
 = (1/(2^{ℓ+1} − 2)) · (1 − 1/(2^{ℓ+1} − 1)) · ((2^ℓ − 1)(n − m) + Σ_m)
 = (1/(2^{ℓ+1} − 1)) · ((2^ℓ − 1)(n − m) + Σ_m) = θ.

This concludes the inductive step of Theorem 7.
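The final steps of this chain rest on the exact identity ((2^ℓ − 1)(n − m) + Σ_m − θ)/(2^{ℓ+1} − 2) = θ, which follows from the definition of θ. Below is a quick sanity check of this identity with exact rationals (our own illustration, over arbitrary random parameters):

```python
import random
from fractions import Fraction

def theta(l, n, m, sigma):
    """theta = ((2^l - 1)(n - m) + sigma) / (2^(l+1) - 1), as in Theorem 7."""
    return Fraction((2 ** l - 1) * (n - m) + sigma, 2 ** (l + 1) - 1)

random.seed(3)
for _ in range(100):
    l = random.randint(1, 8)
    n = random.randint(10, 10 ** 6)
    m = random.randint(0, n)
    sigma = random.randint(0, l * n)  # stand-in for Sigma_m
    th = theta(l, n, m, sigma)
    X = (2 ** l - 1) * (n - m) + sigma
    # Last three expressions of the displayed chain, evaluated exactly:
    step3 = (X - th) / (2 ** (l + 1) - 2)
    step4 = (1 - Fraction(1, 2 ** (l + 1) - 1)) * X / (2 ** (l + 1) - 2)
    assert step3 == step4 == th
print("identity verified on 100 random parameter settings")
```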

3.3 Conclude Theorem 1

Finally, let us conclude Theorem 1. Let the number of coins K be at least 8(2^{ℓ+1} − 1). For Eve's first move, have her choose the one of S_1 and S̄_1 (Bob's two provided sets) with fewer elements, so that at least half the coins remain at position 0. For her second move, have her choose the one of S_2 and S̄_2 whose movement keeps the most coins at 0, which will keep at least 1/4 of the coins at 0. She will do this one more time, so that after 3 moves, there are at least 2^{ℓ+1} − 1 coins at position 0.

Even if there are more than 2^{ℓ+1} − 1 coins at position 0, from now on Eve will fix a subset of 2^{ℓ+1} − 1 such coins to play a coin game on. She wants to show that in this new (n − 3)-turn coin game, she can keep posc_{n−3}(ℓ + 1) ≤ ((2^ℓ − 1)/(2^{ℓ+1} − 1)) · n. This follows by setting m = 0 in Theorem 7: the theorem states that Eve can always guarantee that ℓ + 1 coins stay at or below position ((2^ℓ − 1)/(2^{ℓ+1} − 1))(n − 3) ≤ ((2^ℓ − 1)/(2^{ℓ+1} − 1)) · n. Therefore (ℓ, (2^ℓ − 1)/(2^{ℓ+1} − 1))-list decoding is not possible.
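Eve's opening moves can be checked in code (our own sketch; Bob's partitions are random for illustration): always moving the side that contains fewer of the coins still at position 0 keeps at least half of them at 0, so after three moves at least K/8 survive.

```python
import random

def keep_zeros(K, num_moves, rng):
    """Eve's opening: always move the side containing fewer zero-coins.
    Returns the number of coins still at position 0 afterwards."""
    positions = [0] * K
    for _ in range(num_moves):
        S = set(rng.sample(range(K), rng.randint(1, K - 1)))
        comp = set(range(K)) - S
        zeros = {i for i in range(K) if positions[i] == 0}
        # Move whichever side holds fewer coins that are still at 0.
        moved = S if len(S & zeros) <= len(comp & zeros) else comp
        for i in moved:
            positions[i] += 1
    return sum(1 for p in positions if p == 0)

rng = random.Random(4)
K = 64  # at least 8 * (2^(l+1) - 1) = 56 for l = 2, say
for _ in range(50):
    assert keep_zeros(K, 3, rng) >= K // 8
```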

4 Optimal 2-List Feedback Codes

In this section, we will prove that the feedback ECC from Spencer-Winkler [31] achieves (2, 3/7 − ε)-list decodability, matching the upper bound for ℓ = 2 given in Theorem 1.

4.1 The Spencer-Winkler Coin Game

Let us begin by defining the Spencer-Winkler strategy for the coin game.

Protocol 1: The Spencer-Winkler (𝖲𝖶) Coin Game

Alice has an input x ∈ {0,1}^k. Bob and Eve play the coin game with K = 2^k coins. In each of n rounds, Bob partitions the coins into two sets based on the parity of their order on the board. That is, he numbers the coins from 1 to K, where 1 is assigned to the coin with the smallest position and K to the coin with the largest. The odd-numbered coins are in one set, and the even-numbered coins are in the other. Eve chooses a set whose coins move up by 1.

We will also denote by 𝖲𝖶 the feedback code corresponding to this family of coin game strategies. When defining a feedback code 𝖲𝖶, we must also specify a choice of the code length n(k). It was shown in [31] that 𝖲𝖶 achieves the unique decoding bound of 1/3 − ε for feedback codes.

Theorem 8 ([31]).

Let n = k/ε. Then in the 𝖲𝖶 coin game, no matter which sets Eve chooses each round, it will hold that

posc_n(2) > (1/3 − O(ε)) · n.

Thus the 𝖲𝖶 coin game gives a feedback code that is uniquely decodable up to radius 1/3 − O(ε).

Our goal through the rest of this section is to prove a statement about the list decodability of 𝖲𝖶 to list size 2. This proves Theorem 2.

Theorem 9.

In the 𝖲𝖶 coin game, no matter which sets Eve chooses each round, it holds that

posc_n(3) > (3/7 − 2ε) · n.

That is, the 𝖲𝖶 coin game with n = 24k/ε³ gives a feedback code that is (2, 3/7 − 2ε)-list-decodable.

4.2 Some Useful Inequalities

Throughout this section, let n = 24k/ε³. Our proof that the 𝖲𝖶 protocol achieves (2, 3/7)-list decoding follows from two inequalities, given in Lemmas 11 and 12. First, let us prove a statement about how the position of the i'th coin can change at each round.

Claim 10.

For any t ∈ [n − 1] and for any i ∈ [K], it holds that

posc_{t+1}(i) − posc_t(i) ∈ {0, 1}.
Proof.

The i'th coin c_{t+1}(i) at time t + 1 being at position posc_{t+1}(i) means that at time t there must have been i − 1 coins with positions at most posc_{t+1}(i), and K − i coins with positions at least pos_t(c_{t+1}(i)) ≥ posc_{t+1}(i) − 1. This means that the i'th coin at time t must have had position in the range [posc_{t+1}(i) − 1, posc_{t+1}(i)], from which the claim follows.

Now, we are ready to state our first key lemma: at any timestep, the gap between the positions of the third and fourth coins can exceed the gap between the positions of the first two coins by at most 1.

Lemma 11.

At any point in the 𝖲𝖶 protocol, the following inequality holds:

posc_t(2) − posc_t(1) + 1 ≥ posc_t(4) − posc_t(3).
Proof.

We prove this via straightforward induction. Let us assume at time t that posc_t(2) − posc_t(1) + 1 ≥ posc_t(4) − posc_t(3). This is certainly true at time t = 0, since all coins are at position 0.

By applying Claim 10, it follows that posc_{t+1}(2) − posc_{t+1}(1) differs from posc_t(2) − posc_t(1) by at most 1, and similarly posc_{t+1}(4) − posc_{t+1}(3) differs from posc_t(4) − posc_t(3) by at most 1. This means that (posc(4) − posc(3)) − (posc(2) − posc(1)) can change by at most 2 each round, so we can focus on the relevant case that

(posc_t(4) − posc_t(3)) − (posc_t(2) − posc_t(1)) ∈ {0, 1};

otherwise, the desired inequality at time t + 1 holds automatically.

We split the induction into two cases. The first is when the odd set is moved. In this case, coins c_t(1) and c_t(3) move up. If posc_t(4) − posc_t(3) > 0, then posc_t(4) − posc_t(3) will decrease by 1, and posc_t(2) − posc_t(1) will change by at most 1, so the desired inequality will still hold at time t + 1. If posc_t(4) − posc_t(3) = 0, then posc_{t+1}(4) − posc_{t+1}(3) ≤ 1, and by the case assumption above it must also be true that posc_t(2) − posc_t(1) = 0, so posc_{t+1}(2) − posc_{t+1}(1) ≥ 0 as well. Therefore (posc_{t+1}(4) − posc_{t+1}(3)) − (posc_{t+1}(2) − posc_{t+1}(1)) ≤ 1, as desired.

The second case is when the even set is moved. In this case, coins c_t(2) and c_t(4) move up. If posc_t(3) − posc_t(2) > 0, then c_t(1), c_t(2), and c_t(3) will still be the first, second, and third coins at time t + 1. Then posc_{t+1}(2) − posc_{t+1}(1) = (posc_t(2) + 1) − posc_t(1), while posc_{t+1}(4) − posc_{t+1}(3) ≤ (posc_t(4) + 1) − posc_t(3) by Claim 10, so the desired inequality clearly follows. On the other hand, if posc_t(2) = posc_t(3), then moving the second and fourth coins at time t is equivalent to moving the third and fourth coins at time t, so posc_{t+1}(2) − posc_{t+1}(1) = posc_t(2) − posc_t(1). If posc_t(4) > posc_t(3), then posc_{t+1}(3) = posc_t(3) + 1, so posc_{t+1}(4) − posc_{t+1}(3) ≤ posc_t(4) − posc_t(3), and the inequality follows. Otherwise, if posc_t(4) − posc_t(3) = 0, then by the case assumption above posc_t(2) − posc_t(1) = 0, and since posc_{t+1}(4) − posc_t(4) ∈ {0, 1} and posc_{t+1}(3) − posc_t(3) ∈ {0, 1} by Claim 10, we have posc_{t+1}(4) − posc_{t+1}(3) ≤ 1, so again the inequality follows.
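Both Claim 10 and Lemma 11 are easy to check empirically. The sketch below is our own illustration: it implements the 𝖲𝖶 partition rule (rank the coins, split by rank parity), plays a random Eve, and asserts the two invariants at every step.

```python
import random

def sw_round(pos, eve_choice):
    """One round of the SW coin game: rank the coins by position,
    partition by rank parity, and move the set Eve chooses."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    chosen = order[eve_choice::2]  # eve_choice 0: odd ranks; 1: even ranks
    for i in chosen:
        pos[i] += 1

random.seed(2)
K, n = 8, 300
pos = [0] * K
for _ in range(n):
    prev = sorted(pos)
    sw_round(pos, random.randint(0, 1))
    cur = sorted(pos)
    # Claim 10: each order statistic changes by 0 or 1 per round.
    assert all(cur[i] - prev[i] in (0, 1) for i in range(K))
    # Lemma 11: gap(3,4) exceeds gap(1,2) by at most 1.
    assert cur[1] - cur[0] + 1 >= cur[3] - cur[2]
print("invariants hold over", n, "rounds")
```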

The second key lemma lower bounds a weighted sum of the positions of the first four coins at any timestep.

Lemma 12.

At any round 0 ≤ τ ≤ n, the following holds:

(1/2)·pos_{c_τ(1)} + pos_{c_τ(2)} + pos_{c_τ(3)} + pos_{c_τ(4)} ≥ (3τ/2) − 5ϵn.
Proof.

In order to analyze the positions of four coins collectively, we define a new derived coin game that we call the quadruple coin game. The coins in this quadruple coin game are four-tuples of coins in the old game, for a total of (K choose 4) coins in the new game. Let the set of coins in the original game be denoted S, and let the set of quadruple coins be denoted S^q. The position of quadruple coin (x1, x2, x3, x4) ∈ S^q at time t is (1/2)·pos_t(x1) + pos_t(x2) + pos_t(x3) + pos_t(x4), where we assume that the four coins x1, x2, x3, x4 are ordered according to increasing position at time t.

We denote by c^q_t(i1, i2, i3, i4) the quadruple coin (c_t(i1), c_t(i2), c_t(i3), c_t(i4)), and by pos^q_{c_t(i1,i2,i3,i4)} = pos^q_t(c^q_t(i1, i2, i3, i4)) its position at time t. Our goal is to show that in this quadruple coin game, coin c^q_n(1, 2, 3, 4) will be at position at least (3/2)n − 4ϵn at the end of the protocol.

To do this, we will show that on average, the quadruple coins must be moving at 3/2 units per round, and furthermore that no coin is left behind. By the latter we mean that for any quadruple coin that does not move enough in a given round, there is some earlier coin that is moving more than enough, compensating for the slower movement of the first coin. Precisely, we will do this via a bijection.

Let us first define u_t(i) to be the amount that coin c_t(i) is moved in round t+1. We remark that this is not the value pos_{c_{t+1}(i)} − pos_{c_t(i)}, since after round t, the coins may change their order so that c_t(i) is no longer the i’th coin. We will also define u^q_t(i1, i2, i3, i4) to be the amount that quadruple coin c^q_t(i1, i2, i3, i4) is moved in round t+1. In general, when we write a 4-tuple, the four numbers should be in increasing order. We can also define the poset relation on 4-tuples of indices in [K] by (i1, i2, i3, i4) < (j1, j2, j3, j4) if i_ι ≤ j_ι for all ι ∈ [4].

Claim 13.

At any time t there is a bijection of indices ([K] choose 4) ∋ (i1, i2, i3, i4) ↦ (j1, j2, j3, j4) ∈ ([K] choose 4) such that for any bijected values (i1, i2, i3, i4) ↦ (j1, j2, j3, j4),

u^q_t(i1, i2, i3, i4) + u^q_t(j1, j2, j3, j4) ≥ 3,

and furthermore if u^q_t(i1, i2, i3, i4) < 3/2 then (j1, j2, j3, j4) < (i1, i2, i3, i4).

Proof.

We will define this bijection φ_t by defining φ_t on tuples (i1, i2, i3, i4) for which u^q_t(i1, i2, i3, i4) < 3/2. All other tuples that remain unpaired after this process can be paired up arbitrarily.

Consider a tuple (i1, i2, i3, i4) for which u^q_t(i1, i2, i3, i4) < 3/2. Let us suppose that Bob is moving the even coins in round t+1 (we define the bijection for the odd case in the same way, switching the words “even” and “odd” everywhere). The only way u^q_t(i1, i2, i3, i4) can be less than 3/2 is if there is at most one even index among the four. Let us define the map φ_t depending on which of the i_ι is even.

φ_t(i1, i2, i3, i4) =
  (i1, i2−1, i3−1, i4−1)  if i1, i2, i3, i4 all odd;
  (i1, i2, i3−1, i4)      if i1, i2, i3 odd and i4 even;
  (i1, i2−1, i3, i4)      if i1, i2, i4 odd and i3 even;
  (i1, i2, i3, i4−1)      if i1, i3, i4 odd and i2 even;
  (i1, i2, i3, i4−1)      if i2, i3, i4 odd and i1 even.

It’s straightforward to check that these are all well defined, and that each of the bijected tuples contains four distinct elements. (Recall we always assume i1<i2<i3<i4.)

Next, we also need to check that φ_t(i1, i2, i3, i4) ≠ φ_t(i1′, i2′, i3′, i4′) for all (i1, i2, i3, i4) ≠ (i1′, i2′, i3′, i4′). To see this, we show how to invert each tuple in the image of the map φ_t as we have defined it so far. The key observation is that the five cases give five different parity patterns for the output tuple: for instance, the first case, where i1, i2, i3, i4 are all odd, results in an output whose parities are (odd, even, even, even), whereas all the remaining maps give outputs where two indices are odd and two are even, so we always know how to invert a tuple of pattern (odd, even, even, even).

φ_t^{−1}(j1, j2, j3, j4) =
  (j1, j2+1, j3+1, j4+1)  if j1 odd and j2, j3, j4 even;
  (j1, j2, j3+1, j4)      if j1, j2 odd and j3, j4 even;
  (j1, j2+1, j3, j4)      if j1, j4 odd and j2, j3 even;
  (j1, j2, j3, j4+1)      if j1, j3 odd and j2, j4 even;
  (j1, j2, j3, j4+1)      if j2, j3 odd and j1, j4 even.

Finally, we check that u^q_t(i1, i2, i3, i4) + u^q_t(φ_t(i1, i2, i3, i4)) ≥ 3. In the first case, where i1, i2, i3, i4 are all odd and u^q_t(i1, i2, i3, i4) = 0, φ_t(i1, i2, i3, i4) has three even indices, so u^q_t(φ_t(i1, i2, i3, i4)) = 3. In all the other cases, (i1, i2, i3, i4) has one even index and φ_t(i1, i2, i3, i4) has two even indices, so u^q_t(i1, i2, i3, i4) + u^q_t(φ_t(i1, i2, i3, i4)) = 1 + 2 = 3.
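As a concrete check of Claim 13, the following Python sketch (our own illustration, not from the paper) implements the map φ_t for the case where the even coins move and brute-forces its claimed properties, counting u^q as the number of even indices, as in the accounting above.

```python
from itertools import combinations

def u(tup):
    """Number of even indices in the tuple: the coins of the tuple that
    move when the even set is selected (ignoring the 1/2 weighting)."""
    return sum(1 for i in tup if i % 2 == 0)

def phi(tup):
    """phi_t on tuples with at most one even index (even set selected)."""
    i1, i2, i3, i4 = tup
    pattern = tuple(i % 2 for i in tup)  # 1 = odd, 0 = even
    if pattern == (1, 1, 1, 1):
        return (i1, i2 - 1, i3 - 1, i4 - 1)
    if pattern == (1, 1, 1, 0):
        return (i1, i2, i3 - 1, i4)
    if pattern == (1, 1, 0, 1):
        return (i1, i2 - 1, i3, i4)
    if pattern in ((1, 0, 1, 1), (0, 1, 1, 1)):
        return (i1, i2, i3, i4 - 1)
    raise ValueError("phi is only defined for tuples with <= 1 even index")

K = 12
domain = [t for t in combinations(range(1, K + 1), 4) if u(t) <= 1]
images = [phi(t) for t in domain]

# well defined: strictly increasing outputs, hence four distinct coins
assert all(a < b < c < d for (a, b, c, d) in images)
# injective, and the image lies strictly below in the coordinatewise order
assert len(set(images)) == len(images)
assert all(v <= w for t in domain for v, w in zip(phi(t), t))
assert all(phi(t) != t for t in domain)
# each pair moves a total of at least 3 coins when the even set is chosen
assert all(u(t) + u(phi(t)) >= 3 for t in domain)
```

The five parity patterns of the outputs are pairwise distinct, which is what makes the global map injective.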

Now, let us define the potential function

ψ(t) = Σ_{(x1,x2,x3,x4)} w_t(x1, x2, x3, x4),

where the sum ranges over 4-element subsets (x1, x2, x3, x4) of {0,1}^k, and where

w_t(x1, x2, x3, x4) = (1+ϵ)^{(1/3)·d_t(x1,x2,x3,x4)}

and

d_t(x1, x2, x3, x4) = max{0, (3t/2) − pos^q_t(x1, x2, x3, x4)}.

Since pos^q_0(x1, x2, x3, x4) = 0 for all (x1, x2, x3, x4) at the start of the game, ψ(0) = (K choose 4).

Claim 14.

For any time t, let us consider any bijected pair (i1, i2, i3, i4) ↦ (j1, j2, j3, j4) under φ_t. Define (x1, x2, x3, x4) = c^q_t(i1, i2, i3, i4) and (y1, y2, y3, y4) = c^q_t(j1, j2, j3, j4). Then,

w_{t+1}(x1, x2, x3, x4) + w_{t+1}(y1, y2, y3, y4) ≤ e^{ϵ²/2}·(w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)) + ϵ.
Proof.

We proceed by casework on whether u_t(x1, x2, x3, x4) and u_t(y1, y2, y3, y4) are greater than or less than 3/2.

Case 1: Both u_t(x1, x2, x3, x4) and u_t(y1, y2, y3, y4) are at least 3/2.

  • In this case, d_{t+1}(x1, x2, x3, x4) ≤ d_t(x1, x2, x3, x4) and d_{t+1}(y1, y2, y3, y4) ≤ d_t(y1, y2, y3, y4), so

    w_{t+1}(x1, x2, x3, x4) + w_{t+1}(y1, y2, y3, y4)
    ≤ w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)
    ≤ e^{ϵ²/2}·(w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)) + ϵ.

Case 2: u_t(x1, x2, x3, x4) < 3/2.

  • We write u_t(x1, x2, x3, x4) = 3/2 − δ for some δ > 0. Then d_{t+1}(x1, x2, x3, x4) ≤ d_t(x1, x2, x3, x4) + δ and u_t(y1, y2, y3, y4) ≥ 3/2 + δ. Let us again do casework on whether d_t(y1, y2, y3, y4) is larger or smaller than δ.

    Subcase 2.1: d_t(y1, y2, y3, y4) ≥ δ.

    • In this case, d_{t+1}(y1, y2, y3, y4) ≤ d_t(y1, y2, y3, y4) − δ. Then

      w_{t+1}(x1, x2, x3, x4) + w_{t+1}(y1, y2, y3, y4)
      ≤ (1+ϵ)^{δ/3}·w_t(x1, x2, x3, x4) + (1+ϵ)^{−δ/3}·w_t(y1, y2, y3, y4)
      ≤ ((1+ϵ)^{δ/3} + (1+ϵ)^{−δ/3})/2 · (w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4))
      ≤ ((1+ϵ) + (1+ϵ)^{−1})/2 · (w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4))
      ≤ e^{ϵ²/2}·(w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4))
      ≤ e^{ϵ²/2}·(w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)) + ϵ,

      where the second inequality follows from (j1, j2, j3, j4) < (i1, i2, i3, i4) ⟹ w_t(x1, x2, x3, x4) ≤ w_t(y1, y2, y3, y4) and the rearrangement inequality, and the third inequality follows from (1/3)δ ≤ 1/2 < 1.

    Subcase 2.2: d_t(y1, y2, y3, y4) < δ.

    • In this case, d_{t+1}(y1, y2, y3, y4) = 0, and d_{t+1}(x1, x2, x3, x4) ≤ d_t(x1, x2, x3, x4) + δ ≤ d_t(y1, y2, y3, y4) + δ < 2δ. Then

      w_{t+1}(x1, x2, x3, x4) + w_{t+1}(y1, y2, y3, y4)
      ≤ (1+ϵ)^{(2/3)δ} + (1+ϵ)^0
      ≤ 2 + ϵ
      ≤ (w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)) + ϵ
      ≤ e^{ϵ²/2}·(w_t(x1, x2, x3, x4) + w_t(y1, y2, y3, y4)) + ϵ.

Now, using Claim 14 and summing over all bijected pairs (i1, i2, i3, i4) ↦ (j1, j2, j3, j4), we get

ψ(t+1) ≤ e^{ϵ²/2}·ψ(t) + ϵ·(K choose 4)/2.

Solving the recurrence, we get

ψ(t+1) ≤ e^{(t+1)ϵ²/2}·(K choose 4) + (e^{(t+1)ϵ²/2} − 1)/(e^{ϵ²/2} − 1) · ϵ·(K choose 4)/2
  ≤ e^{(t+1)ϵ²/2}·(K choose 4)·(1 + ϵ/(2(e^{ϵ²/2} − 1)))
  ≤ e^{(t+1)ϵ²/2}·(K choose 4)·(1 + 1/ϵ).
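The unrolling of the recurrence can be checked numerically. The sketch below (our own; the constants are illustrative) iterates ψ(t+1) = e^{ϵ²/2}·ψ(t) + c with equality, the worst case permitted by Claim 14, and compares against the closed-form bound, including the final simplification to a factor of (1 + 1/ϵ).

```python
import math

def closed_form(t, psi0, a, c):
    """Bound from unrolling psi(t+1) <= a * psi(t) + c for t steps."""
    return a ** t * psi0 + (a ** t - 1) / (a - 1) * c

eps = 0.1
a = math.exp(eps ** 2 / 2)
psi0 = 1000.0
c = eps * psi0 / 2          # plays the role of eps * binom(K, 4) / 2

psi = psi0
for t in range(1, 201):
    psi = a * psi + c       # worst case: the recurrence is tight every step
    assert psi <= closed_form(t, psi0, a, c) * (1 + 1e-9)
    # the simplified bound e^{t eps^2/2} * psi0 * (1 + 1/eps)
    assert psi <= a ** t * psi0 * (1 + 1 / eps)
```

The second assertion uses e^{ϵ²/2} − 1 ≥ ϵ²/2, exactly the step that turns the geometric-sum factor into 1 + 1/ϵ.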

Finally, we get our bound on (1/2)·pos_{c_τ(1)} + pos_{c_τ(2)} + pos_{c_τ(3)} + pos_{c_τ(4)}. Let x_i = c_τ(i) for each i ∈ [4], so that (x1, x2, x3, x4) = c^q_τ(1, 2, 3, 4). We have that

w_τ(x1, x2, x3, x4) = (1+ϵ)^{(1/3)·d_τ(x1,x2,x3,x4)} < ψ(τ)
  ≤ e^{τϵ²/2}·(K choose 4)·(1 + 1/ϵ)
  ≤ e^{τϵ²/2}·K^4·(1 + 1/ϵ),

which gives

(3τ/2) − pos^q_τ(x1, x2, x3, x4) ≤ d_τ(x1, x2, x3, x4)
  < (3/ln(1+ϵ))·(τϵ²/2 + 4·ln K + ln(1 + 1/ϵ))
  ≤ (6/ϵ)·(nϵ²/2 + 4·ln K + 1/ϵ)
  ≤ 3ϵn + (6/ϵ)·(4·ln K + 1/ϵ)
  ≤ 5ϵn,

where we use that τ ≤ n, ϵ/2 ≤ ln(1+ϵ), and ln(1 + 1/ϵ) ≤ 1/ϵ, and where the last inequality follows from n = 24k/ϵ³, so that

(6/ϵ)·(4·ln K + 1/ϵ) ≤ (6/ϵ)·(4k + 1/ϵ) = ϵ²n + ϵn/(4k) < 2ϵn.

This completes the proof of Lemma 12.

4.3 The (2, 3/7 − ϵ)-List Decodability of 𝖲𝖶

We now return to our main goal of this section, which is to show that 𝖲𝖶 is 2-list decodable up to radius 3/7. We restate the theorem below for convenience.

See 9

Proof.

Consider the first round n−f at which pos_{c_{n−f}(4)} > (3/7 − 2ϵ)n. There must be such a round: (1/2)·pos_{c_n(1)} + pos_{c_n(2)} + pos_{c_n(3)} + pos_{c_n(4)} ≥ (3/2)n − 5ϵn by Lemma 12, and the left-hand side is at most (7/2)·pos_{c_n(4)}, so pos_{c_n(4)} ≥ (2/7)·((3/2)n − 7ϵn) = (3/7 − 2ϵ)n. After round n−f, only 3 relevant coins remain on the board. Our goal is to show that another falls off.

Suppose that in round n−f the first coin is at position (3/7 − 2ϵ)n − p. From Lemma 11, we know that

pos_{c_{n−f}(2)} + pos_{c_{n−f}(3)} ≥ pos_{c_{n−f}(1)} + pos_{c_{n−f}(4)} − 1
  ≥ (6/7)n − p − 4ϵn − 1
  ≥ (6/7)n − p − 5ϵn. (4)

We also know from Lemma 12 that

pos_{c_{n−f}(2)} + pos_{c_{n−f}(3)} ≥ (3/2)(n−f) − 5ϵn − (1/2)·pos_{c_{n−f}(1)} − pos_{c_{n−f}(4)}
  ≥ (3/2)(n−f) − 5ϵn − (1/2)·((3/7 − 2ϵ)n − p) − (3/7 − 2ϵ)n
  = (6/7)n − (3/2)f + (1/2)p − 2ϵn. (5)

Adding 1/3 of Equation (4) and 2/3 of Equation (5), we get

pos_{c_{n−f}(2)} + pos_{c_{n−f}(3)} ≥ (6/7)n − f − 3ϵn.
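The linear combination above is routine but easy to get wrong, so we verify it with exact rational arithmetic in the following sketch (our own illustration), representing each lower bound by its coefficients on (n, f, p, ϵn).

```python
from fractions import Fraction as F

# lower bounds on pos(2) + pos(3), as coefficient vectors over (n, f, p, eps*n)
eq4 = (F(6, 7), F(0), F(-1), F(-5))          # (6/7)n - p - 5(eps)n
eq5 = (F(6, 7), F(-3, 2), F(1, 2), F(-2))    # (6/7)n - (3/2)f + (1/2)p - 2(eps)n

combo = tuple(F(1, 3) * a + F(2, 3) * b for a, b in zip(eq4, eq5))
# 1/3 * (4) + 2/3 * (5) = (6/7)n - f - 3(eps)n; in particular the p terms cancel
assert combo == (F(6, 7), F(-1), F(0), F(-3))
```

Since a convex combination of two lower bounds on the same quantity is again a lower bound, this confirms the display above.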

Now, suppose that the third coin does not reach position (3/7 − 3ϵ)n by the end of the game. Then, at least one of the second and third coins must move in each of the last f rounds, so that pos_{c_t(2)} + pos_{c_t(3)} increases by at least 1 each round. This means that at round n,

pos_{c_n(2)} + pos_{c_n(3)} ≥ pos_{c_{n−f}(2)} + pos_{c_{n−f}(3)} + f ≥ (6/7 − 3ϵ)n,

which gives that pos_{c_n(3)} ≥ (1/2)·(6/7 − 3ϵ)n > (3/7 − 2ϵ)n, contradicting our assumption that the third coin never falls off the board.

Therefore, no matter which sets the adversary chooses, there will always be at most two coins left on the board at the end of the game.

5 Spencer-Winkler Does Not Give (3, 7/15)-List Feedback Codes

In Section 4, we showed that the Spencer-Winkler strategy for Alice and Bob achieves a (2, 3/7)-list feedback code. In this strategy, Bob always splits the sets into the even-numbered coins and the odd-numbered coins.

A reasonable conjecture might be that this strategy is always optimal, and, in fact, we do not know of any proof that it is not. However, in this section we will show that for ℓ = 3, it does not match our upper bound of 7/15 from Section 3. Whether this is because the protocol is suboptimal or because the upper bound is not tight remains open for further investigation.

See 3

As in the previous sections, we discuss the counterexample in the coin game formulation. We will show an attack for all large enough k. In this setting, Bob has K = 2^k coins for some k > 1000ε^{−1}, and in each of n rounds (Alice and Bob choose n), he partitions the coins into 2 sets. Then, Eve chooses one of the two sets in which to move all the coins up by 1. Eve wins if, at the end of the protocol, at least 4 coins remain at position at most (31/67 + ε)n.

5.1 Eve’s Strategy

We begin by stating Eve’s procedure for determining which set of coins to select at each step. For simplicity, we’ll assume that n is a multiple of 67 and write n=67q. We’ll further assume q is even and εq is an integer.

Protocol 2 : Eve’s Procedure

Alice has a coin x{0,1}k. She and Bob simulate the coin game with K=2k coins. For the n rounds, Eve does the following:

  1. 1.

    For the first 32q rounds, Eve selects (moves up by 1) the even coins.

  2. 2.

    For the next 16q rounds, Eve selects the odd coins.

  3. 3.

    For the next 8q rounds, Eve selects the even coins.

  4. 4.

    For the next 4q rounds, Eve selects the odd coins.

  5. 5.

    For the next 4q rounds, Eve selects the even coins.

  6. 6.

    For the final 3q rounds, Eve selects the odd coins.
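As an illustration (not part of the paper's argument), Eve's schedule can be simulated directly. The Python sketch below (helper names are our own) plays the six phases against even/odd rank splits, tracking only how many coins sit at each position; ties are broken arbitrarily, which does not affect position statistics.

```python
def count_even_ranks(r, g):
    """Number of even values among the 1-based ranks r, r+1, ..., r+g-1."""
    return (r + g - 1) // 2 - (r - 1) // 2

def step(counts, move_even):
    """One round: every coin whose rank parity matches Eve's choice moves up 1.
    counts maps a position to the number of coins sitting there."""
    new, r = {}, 1
    for p in sorted(counts):
        g = counts[p]
        moved = count_even_ranks(r, g) if move_even else g - count_even_ranks(r, g)
        if g - moved:
            new[p] = new.get(p, 0) + g - moved
        if moved:
            new[p + 1] = new.get(p + 1, 0) + moved
        r += g
    return new

def kth_position(counts, k):
    """Position of the k-th coin (k-th smallest position)."""
    seen = 0
    for p in sorted(counts):
        seen += counts[p]
        if seen >= k:
            return p
    raise ValueError("fewer than k coins")

def run_eve(q, K):
    """Play Eve's six phases: 32q even, 16q odd, 8q even, 4q odd, 4q even, 3q odd."""
    counts = {0: K}
    for rounds, move_even in [(32 * q, True), (16 * q, False), (8 * q, True),
                              (4 * q, False), (4 * q, True), (3 * q, False)]:
        for _ in range(rounds):
            counts = step(counts, move_even)
    return counts
```

With, for example, q = 20 and K = 2048, the fourth coin finishes no higher than 31q plus a small slack, consistent with the table in Section 5.2.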

5.2 Analysis of Eve’s Strategy

We will establish our main sequence of claims giving an upper bound on the positions of the first few coins after each of these steps (and one intermediate round as well). The claims are written out below and summarized by the following table. To each upper bound we additionally add +εq, although we do not write it into the table for brevity.

                 pos_{c_m(1)}   pos_{c_m(2)}   pos_{c_m(3)}   pos_{c_m(4)}   pos_{c_m(5)}
(even) m = 32q      0 (a)         16q (b)        16q (b)        16q (b)        16q (b)
(odd)  m = 48q     16q (c)        16q (d)        24q (b)        24q (b)        24q (b)
(even) m = 56q     16q (a)        24q (c)        24q (d)        28q (b)        28q (b)
(odd)  m = 60q     20q (c)        24q (d)        28q (c)        28q (d)        30q (b)
(even) m = 62q     20q (a)        26q (c)        28q (d)        30q (c)        30q (d)
(even) m = 64q     20q (a)        28q (c)        28q (d)        31q (e)        31q (e)
(odd)  m = 67q     23q (c)        28q (d)        31q (c)        31q (d)       33.5q (b)

Figure 3: An upper bound on the positions of the first 5 coins after m rounds. To each listed position bound, additionally add +εq. Eve's selection of odd or even is also noted. The symbols (a)–(e) in the cells are referenced in the proofs below.

Establishing the bounds in the table would show that pos_{c_n(4)} ≤ 31q + εq < (31/67 + ε)n at the end of the n rounds, which establishes Theorem 3.

Now, we will prove the bounds listed in the table. Each argument below will establish the bounds indicated by the corresponding symbol. Each argument requires only that the bounds in the previous line of the table hold, so we can prove the bounds in a top-down order.

To show the (a) bounds, note that selecting the even coins never moves the first coin.

Next, we establish the (b) bounds with the following claim. It shows that for all times m and all i ≤ 6, we have pos_{c_m(i)} ≤ pos_{c_m(6)} < (1/2)m + εq.

Claim 15.

For any round m, as long as k is sufficiently large, it holds that pos_{c_m(6)} < (1/2)m + εq.

Proof.

Since at each step, exactly the odd coins or exactly the even coins move (and there are an even number of coins total), it holds that

pos_{c_m(1)} + ⋯ + pos_{c_m(K)} = (1/2)Km.

Then,

(1/2)Km ≥ pos_{c_m(6)} + ⋯ + pos_{c_m(K)} ≥ (K−6)·pos_{c_m(6)},

which implies that

pos_{c_m(6)} ≤ (1/2)Km/(K−6) < (1/2)m + εq,

where the last inequality holds if K = 2^k is sufficiently large, for example larger than 1000ε^{−1}.

To show the (c) bounds, note the following claim.

Claim 16.

For all i ∈ [K] and rounds t ≤ t′, it holds that pos_{c_{t′}(i)} − pos_{c_t(i)} ≤ t′ − t.

Proof.

All of the coins c_t(1), …, c_t(i) have position at most pos_{c_t(i)} after round t. They can each move at most t′ − t positions by round t′, so there are at least i coins at position pos_{c_t(i)} + (t′ − t) or earlier by the end of round t′, which is what we desired.

Next, we establish the (d) bounds with the following claim.

Claim 17.

For all indices i ∈ [K−1] and rounds t ≤ t′, if the set corresponding to the parity of i+1 is not chosen in that interval, then pos_{c_{t′}(i+1)} ≤ max{pos_{c_t(i+1)}, pos_{c_t(i)} + (t′ − t)}.

Proof.

Let d := pos_{c_t(i+1)} − pos_{c_t(i)}. For the first d steps, the coins c_t(1), …, c_t(i) are all necessarily before the coin c_t(i+1), so pos_{c}(i+1) does not increase for those d rounds (if t′ − t < d, simply stop at round t′). Then, by Claim 16, in the remaining max{t′ − t − d, 0} rounds, pos_{c}(i+1) increases by at most 1 per round. Therefore, the final position of the (i+1)-st coin after round t′ is at most max{t′ − t − d + pos_{c_t(i+1)}, pos_{c_t(i+1)}}. This gives the desired bound because t′ − t − d + pos_{c_t(i+1)} = t′ − t + pos_{c_t(i)} − pos_{c_t(i+1)} + pos_{c_t(i+1)} = t′ − t + pos_{c_t(i)}, as desired.
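Claim 17 can also be checked numerically. The sketch below (our own, with hypothetical helper names) starts from random sorted configurations, always selects the even-ranked coins (so the odd parity is never chosen), and verifies the claimed bound for every odd rank i+1.

```python
import random

def move_even_ranks(pos):
    """One round in which the even-ranked coins (1-based ranks) move up by 1."""
    return sorted(p + (1 if (r + 1) % 2 == 0 else 0) for r, p in enumerate(pos))

random.seed(1)
K = 12
for _ in range(500):
    start = sorted(random.randint(0, 8) for _ in range(K))
    s = random.randint(1, 30)            # the interval length t' - t
    pos = start
    for _ in range(s):
        pos = move_even_ranks(pos)
    # odd ranks i+1 correspond to even 0-based indices j >= 2 (rank j+1)
    for j in range(2, K, 2):
        assert pos[j] <= max(start[j], start[j - 1] + s)
```

No assertion fires, matching the claim in the special case where the odd set is never selected.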

Finally, we establish the (e) bounds, specifically for pos_{c_m(4)} and pos_{c_m(5)} between rounds 62q and 64q.

Claim 18.

After round m + d, where m = 62q and d is even with d ≤ 2q, it holds that pos_{c_{m+d}(5)} ≤ 30q + d/2 + εq.

Proof.

We will show this statement inductively. This holds when d=0 as the base case.

Assume this is true for d, and we'll prove it for d+2. By Claim 16, pos_{c}(5) can increase by at most 2 in these 2 steps, so we only have to address the case where pos_{c_{m+d}(5)} = 30q + d/2 + εq exactly. Moreover, we must have pos_{c_{m+d}(4)} = 30q + d/2 + εq or pos_{c_{m+d}(4)} = 30q + d/2 + εq − 1 exactly, as otherwise we would be done by Claim 17.

All of the coins c_m(1), c_m(2), c_m(3) are at position at most 28q + d + εq < 30q + d/2 + εq − 10 at the end of d steps, so they cannot surpass c_{m+d}(4) during these next 2 steps.

In both the case where pos_{c_{m+d}(4)} = 30q + d/2 + εq and the case where pos_{c_{m+d}(4)} = 30q + d/2 + εq − 1, the 4th and 5th positions each move up by exactly 1 over the two steps, since the first three coins are guaranteed to be out of the picture. This completes the induction.

References

  • [1] Rudolf Ahlswede, Ferdinando Cicalese, Christian Deppe, and Ugo Vaccaro. Two batch search with lie cost. IEEE transactions on information theory, 55(4):1433–1439, 2009. doi:10.1109/TIT.2009.2013014.
  • [2] Omar Alrabiah, Venkatesan Guruswami, and Ray Li. Randomly punctured Reed–Solomon codes achieve list-decoding capacity over linear-sized fields. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 1458–1469, 2024. doi:10.1145/3618260.3649634.
  • [3] Elwyn R. Berlekamp. Block coding with noiseless feedback. PhD thesis, Massachusetts Institute of Technology, 1964.
  • [4] Elwyn R. Berlekamp. Block coding for the binary symmetric channel with noiseless, delayless feedback. Error-correcting Codes, pages 61–88, 1968.
  • [5] Volodia M. Blinovskii. Bounds for codes in decoding by a list of finite length. Problemy Peredachi Informatsii, 22(1):11–25, 1986.
  • [6] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Generic Reed–Solomon codes achieve list-decoding capacity. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1488–1501, 2023. doi:10.1145/3564246.3585128.
  • [7] Christian Deppe. Solution of Ulam's searching game with three lies or an optimal adaptive strategy for binary three-error-correcting codes. Discrete Mathematics, 224(1-3):79–98, 2000. doi:10.1016/S0012-365X(00)00109-6.
  • [8] Christian Deppe. Coding with feedback and searching with lies. In Entropy, Search, Complexity, pages 27–70. Springer, 2007.
  • [9] Peter Elias. List decoding for noisy channels. Massachusetts Institute of Technology, 1957.
  • [10] Ran Gelles. Coding for Interactive Communication: A Survey. Foundations and Trends® in Theoretical Computer Science, 13:1–161, January 2017. doi:10.1561/0400000079.
  • [11] Zeyu Guo and Zihan Zhang. Randomly punctured Reed–Solomon codes achieve the list decoding capacity over polynomial-size alphabets. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 164–176. IEEE, 2023. doi:10.1109/FOCS57990.2023.00019.
  • [12] Meghal Gupta, Venkatesan Guruswami, and Rachel Yun Zhang. Binary error-correcting codes with minimal noiseless feedback. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1475–1487, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585126.
  • [13] Venkatesan Guruswami. List decoding of error correcting codes. PhD thesis, Massachusetts Institute of Technology, 2001. URL: https://api.semanticscholar.org/CorpusID:9164141.
  • [14] Venkatesan Guruswami. List decoding of binary codes–a brief survey of some recent results. In Coding and Cryptology: Second International Workshop, IWCC 2009, Zhangjiajie, China, June 1-5, 2009. Proceedings 2, pages 97–106. Springer, 2009. doi:10.1007/978-3-642-01877-0_10.
  • [15] Venkatesan Guruswami. Linear-algebraic list decoding of folded reed-solomon codes. In 2011 IEEE 26th Annual Conference on Computational Complexity, pages 77–85. IEEE, 2011. doi:10.1109/CCC.2011.22.
  • [16] Venkatesan Guruswami et al. Algorithmic results in list decoding. Foundations and Trends® in Theoretical Computer Science, 2(2):107–195, 2007.
  • [17] Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for list-decoding and list-recovery of random linear codes. arXiv preprint arXiv:2004.13247, 2020. arXiv:2004.13247.
  • [18] Venkatesan Guruswami and Srivatsan Narayanan. Combinatorial limitations of average-radius list-decoding. IEEE Transactions on Information Theory, 60(10):5827–5842, 2014. doi:10.1109/TIT.2014.2343224.
  • [19] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on information theory, 54(1):135–150, 2008. doi:10.1109/TIT.2007.911222.
  • [20] Wojciech Guzicki. Ulam’s searching game with two lies. Journal of Combinatorial Theory, Series A, 54(1):1–19, 1990. doi:10.1016/0097-3165(90)90002-E.
  • [21] Bernhard Haeupler, Pritish Kamath, and Ameya Velingker. Communication with Partial Noiseless Feedback. In APPROX-RANDOM, 2015.
  • [22] S Muthukrishnan. On optimal strategies for searching in presence of errors. In Proceedings of the fifth annual ACM-SIAM symposium on discrete algorithms, pages 680–689, 1994. URL: http://dl.acm.org/citation.cfm?id=314464.314672.
  • [23] Alberto Negro, Giuseppe Parlati, and P Ritrovato. Optimal adaptive search: reliable and unreliable models. In Proc. 5th Italian Conf. on Theoretical Computer Science, pages 211–231. World Scientific, 1995.
  • [24] Andrzej Pelc. Solution of Ulam's problem on searching with a lie. Journal of Combinatorial Theory, Series A, 44(1):129–140, 1987. doi:10.1016/0097-3165(87)90065-3.
  • [25] Andrzej Pelc. Searching games with errors—fifty years of coping with liars. Theoretical Computer Science, 270(1-2):71–109, 2002. doi:10.1016/S0304-3975(01)00303-6.
  • [26] Morris Plotkin. Binary codes with specified minimum distance. IRE Transactions on Information Theory, 6(4):445–450, 1960. doi:10.1109/TIT.1960.1057584.
  • [27] A. Rényi. On a problem of information theory. In MTA Mat. Kut. Int. Kozl., volume 6B, pages 505–516, 1961.
  • [28] Nicolas Resch, Chen Yuan, and Yihan Zhang. Zero-rate thresholds and new capacity bounds for list-decoding and list-recovery. IEEE Transactions on Information Theory, 2024.
  • [29] Eric Ruzomberka, Yongkyu Jang, David J. Love, and H. Vincent Poor. The capacity of channels with o(1)-bit feedback. In 2023 IEEE International Symposium on Information Theory (ISIT), pages 2768–2773, 2023. doi:10.1109/ISIT54713.2023.10206714.
  • [30] Ofer Shayevitz. On error correction with feedback under list decoding. In IEEE International Symposium on Information Theory, ISIT 2009, June 28 - July 3, 2009, Seoul, Korea, Proceedings, pages 1253–1257, August 2009. doi:10.1109/ISIT.2009.5205965.
  • [31] Joel Spencer and Peter Winkler. Three Thresholds for a Liar. Combinatorics, Probability and Computing, 1(1):81–93, 1992. doi:10.1017/S0963548300000080.
  • [32] Madhu Sudan. List decoding: Algorithms and applications. ACM SIGACT News, 31(1):16–27, 2000. doi:10.1145/346048.346049.
  • [33] Stanislaw M Ulam. Adventures of a Mathematician. Univ of California Press, 1991.
  • [34] John M. Wozencraft. List decoding, volume 48. Massachusetts Institute of Technology, 1958.
  • [35] K.Sh. Zigangirov. Number of correctable errors for transmission over a binary symmetrical channel with feedback. Problems Inform. Transmission, 12:85–97, 1976.