List Decoding Bounds for Binary Codes with Noiseless Feedback
Abstract
In an error-correcting code, a sender encodes a message such that it is still decodable by a receiver on the other end of a noisy channel. In the setting of error-correcting codes with feedback, after sending each bit, the sender learns what was received at the other end and can tailor future messages accordingly.
While the unique decoding radius of feedback codes has long been known to be 1/3, the list decoding capabilities of feedback codes are not well understood. In this paper, we provide the first nontrivial bounds on the list decoding radius of feedback codes for lists of size ℓ ≥ 2. For ℓ = 2, we fully determine the 2-list decoding radius to be 3/7. For larger values of ℓ, we show an upper bound of (2^ℓ − 1)/(2^{ℓ+1} − 1) on the ℓ-list decoding radius, and show that the techniques used for the case ℓ = 2 cannot match this upper bound in general.
Keywords and phrases: error-correcting codes, feedback, list decoding
Funding: Meghal Gupta: Supported by an NSF GRFP Fellowship.
2012 ACM Subject Classification: Mathematics of computing → Coding theory
Editors: Raghu Meka
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Consider the problem of error-correcting codes with feedback [3]. Alice wishes to communicate a message x to Bob by transmitting a sequence of bits over a noisy channel. After sending each bit, she receives feedback through a noiseless channel, telling her what bit Bob actually received. She can use this feedback to adaptively decide her next bit of communication, potentially allowing for improvement over classical error-correcting codes. As in the classical setting, the goal is to communicate in a way such that even if some fraction of Alice's bits are adversarially corrupted, Bob can still correctly determine x.
For classical error-correcting codes without feedback, the Plotkin bound [26] states that Bob cannot uniquely decode if more than 1/4 of the communication is adversarially corrupted. In other words, the unique decoding radius for binary error-correcting codes is 1/4. (In this introduction, when we state a resilience threshold/radius r, we mean that for any ε > 0 there exists a protocol resilient to an (r − ε) fraction of errors, with positive rate. For the sake of simplicity, we have omitted all ε's from this introduction.) The corresponding question for codes with feedback was first posed by Berlekamp in his 1964 thesis, "Block Coding with Noiseless Feedback": What is the maximum fraction of errors from which a feedback-based error-correcting code can still be uniquely decoded? Remarkably, he proved that in the feedback setting, the unique decoding radius increases to 1/3, surpassing the Plotkin bound.
While Berlekamp’s result tells us the unique decoding radius of feedback codes, little is known about the list-decoding case. In list-decoding, first introduced by Elias [9] and Wozencraft [34], the goal is for Bob to output a small list of messages including Alice’s. The concept of list decoding for non-feedback codes has proven to be quite illuminating for understanding the Hamming space and for various other applications (see e.g. [13] for further discussion).
In the classical setting, list decoding is useful because it can be done at a larger error radius than unique decoding. In fact, the optimal list decoding radius is known for (non-feedback) codes. For decoding into a list of size ℓ, the optimal radius is 1/2 − (2k choose k)/2^{2k+1} where k = ⌈ℓ/2⌉, which is asymptotically 1/2 − Θ(1/√ℓ) [5]. This is achieved by positive rate random codes.
In this paper, we investigate the analogous question for feedback codes, first posed in [30]. In a feedback code, what is the maximum fraction α of adversarial errors such that Bob can always learn a list of size ℓ containing Alice's message? We call such a code an (α, ℓ)-list feedback code. In our work, we provide the first non-trivial unconditional lower and upper bounds for this question.
1.1 Results And Further Discussion
Our first result gives an upper bound on the error resilience (decoding radius) of ℓ-list decodable feedback codes.
Theorem 1.
For any positive integer ℓ and any α > (2^ℓ − 1)/(2^{ℓ+1} − 1), there is no (α, ℓ)-list feedback code.
Prior to this work, the only known bound on the list decoding radius of feedback codes, aside from the trivial upper bound of 1/2, was given in [30]. That result showed upper bounds only for a specific subclass of feedback codes that satisfy a "generalized translation" property. It turns out this generalized translation property is not very comprehensive: we will in fact demonstrate a code that breaks the upper bound proven in [30]. In contrast, our upper bound is unconditional and applies to all feedback codes.
In the case of ℓ = 1, Theorem 1 provides an upper bound of 1/3 on the unique decoding radius, consistent with the result in [3]. The key question, however, is whether this bound is tight in general. Although we do not have an answer for arbitrary ℓ, we are able to show it is tight in the specific case of ℓ = 2. Specifically, we show that for any ε > 0, there exists a (3/7 − ε, 2)-list feedback code, matching the upper bound established by Theorem 1.
Theorem 2.
For any ε > 0, there is an (α, 2)-list feedback code where α = 3/7 − ε.
The particular feedback code we use to achieve 2-list decodability is the same code found in [31], which Spencer and Winkler showed achieves unique decoding up to radius 1/3. The code, which we'll denote SW, is defined in Section 4. Compared to the unique-decodability analysis in [31], however, our analysis is significantly more involved, requiring us to characterize the likelihood of four possible inputs at once rather than one.
A reasonable conjecture at this point is that the protocol SW always gives an ℓ-list decodable code up to the radius of Theorem 1. This would show that Theorem 1 is optimal.
Unfortunately, this turns out not to be true. We show that for ℓ = 3, the error resilience of the Spencer-Winkler protocol does not match the upper bound given in Theorem 1: there is an adversarial corruption strategy on SW that requires a fraction of corruptions strictly smaller than the 7/15 bound of Theorem 1 to confuse Bob between four different inputs.
Theorem 3.
For ℓ = 3, the feedback code SW (defined in Section 4) is not 3-list decodable up to the radius of Theorem 1, for any choice of encoding length n.
Although this result states that the feedback code SW is not 3-list decodable up to this radius, it remains unknown whether a different feedback code achieves this bound or whether the upper bound can be improved.
Even the asymptotic behavior for this question remains unknown. Our upper bound suggests that the feedback list decoding radius could approach 1/2 exponentially fast with respect to the list size ℓ: that is, that the radius for ℓ-list decoding is 1/2 − Θ(2^{−ℓ}). We conjecture that this exponential bound is in fact optimal for the feedback case and propose proving or disproving this as an interesting direction for future research.
In the case of standard ECCs (without feedback), as mentioned earlier, the optimal list decoding radius for list size ℓ is known [5]. That is, for any ε > 0, there exist ℓ-list decodable codes at radius ε below the optimum. Asymptotically, this bound says that the list decoding radius is approximately 1/2 − Θ(1/√ℓ), which approaches 1/2 much more slowly than the conjectured radius for the feedback case. Even short of proving the exponential bound, demonstrating a separation between the asymptotics for the feedback and non-feedback cases would be an intriguing result.
1.2 Related Works
1.2.1 Error Correcting Codes with Feedback
We already mentioned the seminal work of [3], which introduced the concept of error-correcting codes with feedback. Berlekamp studied adversarial corruption strategies and proved that 1/3 is the optimal error resilience for a feedback ECC in the face of adversarial errors. Feedback ECCs have also been studied for the binary symmetric channel, where every bit is flipped independently with some fixed probability [4, 35].
Particularly relevant is the work of [30], who initiated the study of list decoding feedback codes and gave some preliminary bounds on the list decoding capacities and radii of codes with noiseless feedback. Using generalizations of Berlekamp's methods, he provides upper bounds on list-decoding capacities for strategies that satisfy what he calls a "generalized translation" property. By contrast, our upper bounds make no assumptions about the types of protocols allowed and are fully general.
Many other works on error-correcting codes with feedback present themselves instead through the lens of adaptive search with lies. The two setups are equivalent, and our results imply results in the field of adaptive search with lies. We discuss the relationship below.
Adaptive Search with Lies
An equivalent way of looking at error-correcting codes with unlimited feedback is as follows: After every message from Alice, Bob asks a yes or no question about Alice's input, and Alice's next bit is a response to that question. The adversary can corrupt Alice's responses, but only up to some fraction of the time. Bob is "searching" for Alice's input in a known range by asking these yes/no questions, while receiving incorrect answers up to some fraction of the time. This game is known as the Rényi-Ulam game, independently introduced by Rényi [27] and Ulam [33].
Two excellent surveys on the research in this area are presented in [8, 25]. There are a number of works that aspire to improve upon [3]. For example, [22] presents a strategy with optimal rate (asking the least possible number of questions) while still achieving the best error-correction threshold. An alternative construction meeting the same optimal error resilience was given in [31]. We mention that some works focus on a fixed (constant) number of lies rather than resilience to a fraction of errors, and ask for the smallest number of questions necessary. [24] shows the optimal bound for one lie, the original Rényi-Ulam game. Followups find the optimal strategy for two and three lies as well [20, 7].
Limited Feedback/Questions
The works of [23, 1] study the minimal number of rounds, or batches, of questions Bob needs to ask when conducting adaptive search in the Ulam-Rényi model. The work of [25] poses the open question of how many batches of feedback are necessary before the optimal strategy becomes essentially the same as in the fully adaptive setting.
In a more code-centric view, this model corresponds to a feedback code with limited feedback, where Alice receives feedback only a few times throughout the protocol instead of after every bit she sends. [21] studied the case where the feedback Alice receives consists of a small fraction of Alice's rounds. [12] took this direction even further, showing that a constant number of rounds of feedback, consisting of few bits in total, is enough to achieve the same unique decoding radius as in the unlimited feedback case. [29] showed that with a bounded amount of feedback, one can achieve the zero-error capacity of the error channel with full noiseless feedback.
1.2.2 List Decoding
The notion of list decoding was introduced by Elias [9] and Wozencraft [34]. The goal is for Bob to output a small list of messages including Alice’s. Although they introduced list-decoding in the setting of random error models, the main focus since then has been for adversarial error models. See [32, 16, 14] for surveys on the topic.
One main advantage is that list decoding enables correcting up to a factor two more worst-case errors compared to unique decoding. As we stated earlier, the optimal tradeoff between the list size and error tolerance (in the classical non-feedback setting) is known [5]. This work shows that there are positive rate codes achieving this maximum possible error tolerance. An analogous result for -ary codes was shown in [28].
Another advantage of list decoding is that it allows for better rate codes. One can ask what the optimal tradeoff between the error tolerance and the rate of the code is when the list size is required to be small or constant, without necessarily achieving the optimal radius bounds above. In the large alphabet setting, a simple random coding argument shows that list decoding radius 1 − R − ε is achievable at rate R with constant (depending on ε) list size. The work of [19] provides an explicit positive rate, efficiently encodable/decodable code for this task using folded Reed-Solomon codes. This research has since been expanded upon, including in [15, 6, 11, 2].
2 Definitions
Before stating our results, let us formally define a list decodable feedback ECC.
Definition 4 ((α, ℓ)-List Decodable Feedback Code).
An (α, ℓ)-list feedback code is a family of protocols from Alice to Bob defined as follows for each n.
-
Alice begins with an input x that she intends to communicate to Bob.
-
In each of n rounds, Alice sends a single bit that may be flipped (by an adversary). At the end of the round, Alice learns whether the bit was flipped.
-
At the end of the protocol, Bob must output a list of size ℓ that must contain x, as long as at most αn bits were flipped throughout the protocol.
The Coin Game Reformulation
Throughout the remainder of the paper, we’ll argue our results about list decodable feedback ECC’s in the language of a coin game played between two players Bob and Eve. To this end, we reformulate the task of communicating over a noisy channel with feedback as this aforementioned coin game. A version of this coin game reformulation was first given by [31] but is quite similar to the adaptive search with lies formulation of [33] and [27].
Definition 5 ((α, ℓ)-Coin Game).
In the coin game parametrized by α and ℓ, two players Bob and Eve are playing a game on a number line. At the beginning of the game, all coins are at position 0. In each of n rounds, Bob partitions the coins into two sets, and Eve chooses a set for which Bob moves all the coins up by 1.
Eve wins if, at the end of the n rounds, there are ℓ + 1 coins all with positions at most αn, and Bob wins otherwise.
The connection to feedback codes is as follows. The coins correspond to all possible values of Alice's input. For each of the n rounds of the feedback code, Bob partitions the coins into two sets based on the two possibilities, 0 and 1, for Alice's next message. Based on the bit that he receives, he increments the positions of all coins in the opposite set; that is, the coins for which Alice would have sent the opposite bit. In this manner, at any point in time, each coin's position is the number of corruptions that must have occurred so far if the given coin were actually Alice's input.
Let's say we want a scheme that is (α, ℓ)-list decodable. Bob wins if at the end of the n rounds, there are at most ℓ coins left with positions at most αn (we say all other coins have "fallen off the board"). Eve's goal, on the other hand, is to pick a set that Bob hears each round so that at the end of the game there are ℓ + 1 coins left on the board. If she's able to pick any sequence of sets such that ℓ + 1 coins are left on the board at the end of the game, then the resulting code is necessarily not (α, ℓ)-list decodable, since any of the ℓ + 1 coins left on the board could've been Alice's input.
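To make the reduction concrete, the coin game is straightforward to simulate. The following sketch is ours, not from the paper; the function names and the example strategies are purely illustrative.

```python
def play_coin_game(num_coins, num_rounds, bob_partition, eve_choice):
    """Simulate the coin game. bob_partition(pos, r) returns one side of
    Bob's partition as a set of coin indices; eve_choice(pos, side, r)
    returns True to move that side, False to move its complement.
    Every coin in the chosen side moves up by 1."""
    pos = [0] * num_coins
    for r in range(num_rounds):
        side = bob_partition(pos, r)
        move_side = eve_choice(pos, side, r)
        for i in range(num_coins):
            if (i in side) == move_side:
                pos[i] += 1
    return pos

# Example: Bob always isolates coin 0; Eve always moves the smaller side,
# so only coin 0 ever moves while the other two coins stay on the board at 0.
bob = lambda pos, r: {0}
eve = lambda pos, side, r: len(side) <= len(pos) - len(side)
final = play_coin_game(3, 9, bob, eve)  # final == [9, 0, 0]
```

Note that each coin's final position counts exactly the corruptions the adversary would need if that coin were Alice's true input, matching the discussion above.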
Notation
Let us define the notation that we will use for a coin game.
-
For a coin c, pos^{(r)}(c) denotes the position of coin c after r rounds.
-
c_i^{(r)} denotes the coin with the i'th smallest position after r rounds (ties broken consistently).
-
For each i, x_i^{(r)} denotes the position of the coin with the i'th smallest position after r rounds.
3 Upper Bound
In this section, we show an upper bound (impossibility result): no ℓ-list feedback code can achieve an error resilience greater than (2^ℓ − 1)/(2^{ℓ+1} − 1). Let us state the main theorem of this section.
Theorem 1 (restated).
In the formulation of the coin game, we wish to establish that in an n-round protocol, the adversary can always maintain ℓ + 1 coins at position at most ((2^ℓ − 1)/(2^{ℓ+1} − 1)) · n, provided the number of coins is sufficiently large compared to ℓ. From this, we formally conclude Theorem 1 in Section 3.3.
We begin by establishing the statement for ℓ = 1; that is, the adversary can always maintain two coins at or below position n/3. This statement was known as early as [3], when the concept of feedback codes was introduced, but we recreate a proof here. Our proof for general ℓ is inductive and therefore requires establishing the ℓ = 1 case first.
3.1 Base Case: 1-List Feedback Codes
We want to show that the adversary can always maintain the second coin at or below position n/3. However, in order to use our statement as an appropriate base case for the inductive hypothesis in Section 3.2, we actually show the following stronger statement about partially completed coin games.
Consider a partially completed coin game in which t of the n rounds have been played already. Then, the adversary can limit x₂^{(n)} in terms of these variables, specifically by the expression (n − t + x₁^{(t)} + x₂^{(t)} + x₃^{(t)})/3. In particular, the case t = 0 corresponds to the situation where the entire coin game is remaining, and indeed the expression says that x₂^{(n)} is at most n/3.
Theorem 6.
For any 0 ≤ t ≤ n, the following holds for sufficiently large n. Consider an n-round coin game on 3 coins in which t rounds have already been completed. Then, the adversary can ensure at the end of the n rounds that
x₂^{(n)} ≤ (n − t + x₁^{(t)} + x₂^{(t)} + x₃^{(t)})/3.
Before proving this, let us informally describe the strategy Eve uses to achieve this bound. This strategy is essentially identical to what has appeared in previous literature such as [10]. For most of the protocol, she simply chooses the smaller (fewer elements) of the two sets given by Bob. That way, she need only move up one coin, and so on average each coin moves up by 1/3 per round, which is what gives the (n − t)/3 term in the stated bound. However, the issue is that the coins need not move evenly. For example, the first coin may not move at all, while the last two coins each move in half of the rounds. Then, the position of the second coin increases too much, but in compensation, we have some leeway with the first coin. To use this, once the first coin is sufficiently low in position that it can never surpass the second coin, Eve switches strategies to always choosing the set that does not contain the second coin. Now, although the average coin can move up more per round, the second coin no longer moves, which is what we wanted, and we do not risk the first coin catching up to the second by doing this.
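As a sanity check, the two-phase idea is easy to simulate. The sketch below is ours: the switching condition (the smallest coin can no longer catch the second-smallest in the remaining rounds) is a paraphrase of the informal description above, not necessarily the exact threshold used in the formal proof.

```python
def second_coin_bound(num_rounds, bob_partition):
    """Play the 3-coin game with the two-phase Eve described above and
    return the final sorted positions. The switching condition below is
    our paraphrase of the prose, for illustration only."""
    pos = [0, 0, 0]
    for r in range(num_rounds):
        side = bob_partition(pos, r)
        order = sorted(range(3), key=lambda i: pos[i])
        first, second = order[0], order[1]
        if pos[first] + (num_rounds - r) > pos[second]:
            # Phase 1: move the smaller side.
            move = side if len(side) <= 3 - len(side) else {0, 1, 2} - side
        else:
            # Phase 2: protect the second-smallest coin.
            move = side if second not in side else {0, 1, 2} - side
        for i in move:
            pos[i] += 1
    return sorted(pos)

# Bob repeatedly isolates the current second-smallest coin, the kind of
# attack that defeats the "smaller set" rule on its own.
bob = lambda pos, r: {sorted(range(3), key=lambda i: pos[i])[1]}
second_coin_bound(99, bob)  # → [33, 33, 66]: the second coin stops at 99/3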
Proof.
In each round r, Bob chooses a set S_r and its complement (both subsets of the 3 coins in the game), and the adversary Eve needs to choose one set whose coins to move up by 1 each. Eve uses the following strategy.
For a round r, while x₁^{(r)} + (n − r) > x₂^{(r)}, Eve always chooses the smaller of S_r and its complement, that is, the one with fewer elements. (Note that the inequality in this condition is strict, so, for example, if x₁^{(t)} + (n − t) ≤ x₂^{(t)}, it is possible to skip this step entirely.)
Once x₁^{(r)} + (n − r) ≤ x₂^{(r)}, Eve instead always selects the set not containing the coin in the second position.
Let us say this transition between the two strategies occurs at round r₀, so Eve spends rounds t + 1 through r₀ inclusive doing the first strategy, and the rest of the protocol doing the second strategy. (Since the condition is certainly violated once n − r = 0, such a round r₀ ≤ n exists.) Our goal will be to show that in the remaining rounds, the first coin will not pass the second coin, even if it is selected to increase every time; the second coin then never moves after round r₀.
Between rounds t and r₀, in every step, the total of the positions of the coins increases by at most 1, so x₁^{(r₀)} + x₂^{(r₀)} + x₃^{(r₀)} ≤ x₁^{(t)} + x₂^{(t)} + x₃^{(t)} + (r₀ − t). Also, at round r₀ the switching condition gives x₁^{(r₀)} ≥ x₂^{(r₀)} − (n − r₀). Therefore, at the end of round r₀, using x₃^{(r₀)} ≥ x₂^{(r₀)}, it holds that
3 · x₂^{(r₀)} ≤ x₁^{(r₀)} + x₂^{(r₀)} + x₃^{(r₀)} + (n − r₀) ≤ n − t + x₁^{(t)} + x₂^{(t)} + x₃^{(t)}.
In other words, in the remaining n − r₀ rounds, the first coin gains at most n − r₀ ≤ x₂^{(r₀)} − x₁^{(r₀)}, so it will not be able to switch positions with the second or third coin. As such, x₂ will never increase after round r₀, since the second coin is never selected.
This concludes the proof, as it means that x₂^{(n)} ≤ x₂^{(r₀)} ≤ (n − t + x₁^{(t)} + x₂^{(t)} + x₃^{(t)})/3.
3.2 Inductive Step: -List Feedback Codes
In this section, we establish the inductive step. We inductively show an analog of Theorem 6 for larger list sizes.
Theorem 7.
For any positive integer ℓ, the following holds for sufficiently large n. Consider an n-round coin game on 2^{ℓ+1} − 1 coins in which t rounds have already been completed. For round t, let
If then Eve can ensure at the end of the rounds that .
Again, before proving this formally, let us describe Eve's strategy at a high level. Her strategy to achieve this bound is recursive. At the start of the protocol, she chooses the smaller of Bob's two sets at every step, so that on average each coin moves by at most (2^ℓ − 1)/(2^{ℓ+1} − 1) per round, giving the corresponding term in the bound. As with the ℓ = 1 case, this doesn't give the desired bound for the final position of the (ℓ + 1)'th coin, because it's possible the first few coins did not move at all, while the (ℓ + 1)'th coin moved by more. However, it can be shown that in compensation, the position of the first coin must be very low in this case. Once the first coin is at a low enough position that even if it is selected in every remaining round, it can no longer exceed the target threshold by the end of the protocol, Eve switches strategies. Now, she needs to ensure that of the remaining coins, ℓ of them remain below the threshold (since we have already found one coin that will definitely remain below it). This is where her strategy becomes recursive: she simulates a new coin game on the leftmost 2^ℓ − 1 of the remaining coins and plays on these coins according to her (recursively described) strategy for the case of ℓ − 1.
Proof.
First we check that the statement holds for the base case ℓ = 1. This follows from Theorem 6. In this case, we have that
In the last step, we used the condition in Theorem 7 that . Rearranging, this implies that . Theorem 6 states that , since , which is what we wanted to prove.
For the inductive step, let us assume the statement is true for ℓ − 1 and prove it for ℓ. Throughout the proof, let .
In each round , Bob’s two sets are and its complement , and the adversary Eve needs to choose one set for which to move up the coins by each. Eve uses the following strategy.
For a round r, while the analog of the condition from Theorem 6 holds, Eve always chooses the one of Bob's two sets with fewer elements. Denote the round this condition is first violated as r₀, if it is ever violated; note that this may already happen immediately at round t, by the assumption in the theorem statement.
At this point (after round r₀), or at the start of the protocol if the condition began violated, it is impossible for the first coin to ever hit the threshold, since its position can only increase by 1 in each of the remaining rounds. Thus Eve can ignore the first coin, and play a coin game on only the remaining coins, with the goal of keeping ℓ of them below the threshold. She also opts to exclude the second half of these coins from her new coin game, and simply assumes that they will end up above the threshold. In all, the players are playing a new coin game on 2^ℓ − 1 coins, with Eve having the goal of keeping ℓ of them below the threshold. She executes the strategy for this recursively, keeping the relevant coin bounded by what Theorem 7 says for ℓ − 1.
Now, let us analyze this strategy.
First, we address the case where round r₀ never occurs. In this case, at the end of the protocol, it holds that . Since Eve is picking the smaller of the two sets each time, it holds that
which is a contradiction, where the last inequality follows because the middle term is at most by definition.
Next, we address the case where round r₀ occurs. After round r₀, the defining inequality holds with equality, and so
| (1) |
The coin in position 1 at time r₀ can never fall off the board, because its position can only increment by 1 in each of the last n − r₀ rounds, and so we can disregard it. We will now apply the inductive hypothesis to a new game played on the coins in positions 2 through 2^ℓ at time r₀. We will renumber the coins, so we can view this new game as being played on coins numbered 1 through 2^ℓ − 1, where the position functions are inherited from the original game. Denote
Our goal is to show that . This will prove the inductive step for the following reasons:
-
We have that , so the condition for the inductive step of the theorem is met.
-
We know that , because coin cannot surpass and all of coins are at most . Moreover, by the inductive hypothesis, we know that , and so as we wanted.
Now, we show the desired statement that . Using the bound (1) we found on , we have that
This concludes the inductive step of Theorem 7.
3.3 Conclude Theorem 1
Finally, let us conclude Theorem 1. Let the number of coins be at least 8(2^{ℓ+1} − 1). For Eve's first move, have her choose the one of Bob's two provided sets with fewer elements, so that at least half the coins remain at position 0. For her second move, have her choose the one of the two sets that keeps the most coins at 0, which will keep at least a quarter of the coins at 0. She will do this one more time, so that after 3 moves, there are at least an eighth of the coins at position 0.
Even if there are more than 2^{ℓ+1} − 1 coins at position 0, from now on Eve will fix a subset of 2^{ℓ+1} − 1 such coins to play a coin game on. She wants to show that in this new coin game, she can keep ℓ + 1 coins low. This follows by setting t = 3 in Theorem 7, since all the fixed coins still have position 0. The theorem states that Eve can always guarantee that ℓ + 1 coins stay at or below the stated position. Therefore ℓ-list decoding is not possible above this radius.
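The opening moves above are a simple pigeonhole: whatever partition Bob announces, one side contains at least half of the coins still at position 0, and Eve moves the other side. A small sketch (ours, for illustration):

```python
def keep_zeros_round(pos, side):
    """One round of Eve's opening strategy: move the side containing fewer
    position-0 coins, so at least half of the zero-position coins stay at 0."""
    zeros = {i for i, p in enumerate(pos) if p == 0}
    in_side = len(zeros & side)
    move = side if in_side <= len(zeros) - in_side else set(range(len(pos))) - side
    return [p + 1 if i in move else p for i, p in enumerate(pos)]

pos = [0] * 8
for side in [{0, 1, 2, 3}, {0, 1, 4, 5}, {0, 2, 4, 6}]:  # any partitions Bob likes
    pos = keep_zeros_round(pos, side)
# After three rounds, at least 8 / 2**3 = 1 coin is still at position 0.
assert pos.count(0) >= 1
```

With these example partitions, coin 7 is never moved, so it is still at position 0 after the three rounds.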
4 Optimal 2-List Feedback Codes
In this section, we will prove that the feedback ECC from Spencer-Winkler [31] achieves 2-list decodability up to radius 3/7, matching the upper bound for ℓ = 2 given in Theorem 1.
4.1 The Spencer-Winkler Coin Game
Let us begin by defining the Spencer-Winkler strategy for the coin game.
Protocol 1: The Spencer-Winkler (SW) Coin Game
Alice has an input corresponding to one of the coins. Bob and Eve play the coin game with N coins. In each of n rounds, Bob partitions the coins into two sets based on the parity of their positions on the board. That is, he numbers the coins from 1 to N, where 1 is assigned to the coin with the smallest position, and N is the coin that is furthest away. The odd-numbered coins are in one set, and the even-numbered coins are in the other. Eve chooses a set whose coins to move up by 1.
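The parity split is easy to implement; the following sketch (ours) computes Bob's two sets from the current positions, breaking position ties by coin index for concreteness.

```python
def sw_partition(pos):
    """Spencer-Winkler split: rank coins by position (ties broken by index),
    odd ranks in one set, even ranks in the other."""
    order = sorted(range(len(pos)), key=lambda i: (pos[i], i))
    odd = {order[k] for k in range(0, len(order), 2)}   # ranks 1, 3, 5, ...
    even = set(range(len(pos))) - odd
    return odd, even

odd, even = sw_partition([3, 1, 2, 5])
# ranks: coin 1 (pos 1) -> 1, coin 2 (pos 2) -> 2, coin 0 (pos 3) -> 3, coin 3 -> 4
assert odd == {0, 1} and even == {2, 3}
```

Note the split depends only on the current positions, so Bob can recompute it each round from what he has received so far.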
We will also denote by SW the feedback code corresponding to this family of coin game strategies. When defining a feedback code, we must also specify a choice of the code length n. It was shown in [31] that SW achieves the unique decoding bound of 1/3 for feedback codes.
Theorem 8 ([31]).
Let n be the code length. Then in the SW coin game, no matter which sets Eve chooses each round, it will hold that
Thus the SW coin game gives a feedback code that is uniquely decodable up to radius 1/3.
Our goal through the rest of this section is to prove a statement about the list decodability of SW for list size 2. This proves Theorem 2.
Theorem 9.
In the SW coin game, no matter which sets Eve chooses each round, it holds that
That is, the SW coin game gives a feedback code that is 2-list-decodable up to radius 3/7.
4.2 Some Useful Inequalities
Throughout this section, let . Our proof that the protocol SW achieves 2-list decoding follows from two inequalities, given in Lemmas 11 and 12. First, let us prove a statement about how the position of the i'th coin can change at each round.
Claim 10.
For any i and for any round r, it holds that
x_i^{(r)} ≤ x_i^{(r+1)} ≤ x_i^{(r)} + 1.
Proof.
The i'th coin at time r + 1 being at position p means that at time r there must have been at least i coins with positions at most p and at most i − 1 coins with positions below p − 1 (positions never decrease, and each coin moves up by at most 1 per round). This means that the i'th coin at time r must have had position in the range [p − 1, p], from which the claim follows.
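Claim 10 holds for arbitrary partitions and choices, which is easy to check empirically. The randomized check below is ours and is only a sanity test, not a proof.

```python
import random

def check_sorted_step(num_coins=6, rounds=50, seed=0):
    """Empirically check Claim 10: the i'th smallest position never decreases
    and changes by at most 1 per round, for arbitrary partitions and choices."""
    rng = random.Random(seed)
    pos = [0] * num_coins
    for _ in range(rounds):
        before = sorted(pos)
        side = {i for i in range(num_coins) if rng.random() < 0.5}
        move = side if rng.random() < 0.5 else set(range(num_coins)) - side
        for i in move:
            pos[i] += 1
        after = sorted(pos)
        for b, a in zip(before, after):
            assert b <= a <= b + 1
    return True

check_sorted_step()
```

The underlying reason the check always passes is the one in the proof: order statistics are monotone in their inputs, and every input grows by at most 1 per round.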
Now, we are ready to state our first key lemma: the gap between the positions of the first two coins must always be at least the gap between the positions of the third and fourth coins at any timestep.
Lemma 11.
At any point in the protocol, the following inequality holds:
x₂^{(r)} − x₁^{(r)} ≥ x₄^{(r)} − x₃^{(r)}.
Proof.
We prove this via straightforward induction. Let us assume at time r that x₂^{(r)} − x₁^{(r)} ≥ x₄^{(r)} − x₃^{(r)}. This is certainly true at time 0, since all coins are at position 0.
By applying Claim 10, it follows that and similarly . This means that can change by at most each round, and we can focus on the relevant case that
We split the induction into two cases. The first is if Bob moves the odd set. In this case, Bob will move up the first and third coins. If , then this means that will decrease by , and will change by at most , so the inequality of Lemma 11 will still hold at time . If , then , and also by the case assumption above it must be true that , so as well. Therefore, as desired.
The second case is if Bob moves the even set. In this case, Bob will move up the second and fourth coins. If , then , and will still be the first, second, and third coins at time . Then , while by Claim 10, so the inequality of Lemma 11 clearly follows. On the other hand, if , then moving the second and fourth coins at time is equivalent to moving the third and fourth coins at time , so . If , then , so , and the inequality follows. Otherwise if , this means by the case assumption above that , and since and by Claim 10, we have that , so again the inequality follows.
The second key lemma is about the sum of the positions of the first four coins at any timestep.
Lemma 12.
At any round , the following holds:
Proof.
In order to analyze the positions of four coins collectively, we define a new derived coin game that we call the quadruple coin game. The coins in this quadruple coin game are four-tuples of coins in the old game, for a total of (N choose 4) coins in the new game. Let C denote the set of coins in the original game, and Q the set of quadruple coins. The position of a quadruple coin at time r is the sum of the positions of its four constituent coins, where here we assume that the four coins are ordered according to increasing position at time r.
We denote by the quadruple coin , and by its position at time . Our goal is to show that in this quadruple coin game, coin will be at position at the end of the protocol.
To do this, we will show that on average, the quadruple coins must be moving up at a certain rate per round, and furthermore that no coins are left behind. What we mean by the latter is that for any quadruple coin that doesn't move enough in a given round, there is some earlier coin that is moving more than enough, compensating for the slower movement of the first coin. Precisely, we will do this via a bijection.
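The derived game can be sketched as follows (our code, for illustration): each quadruple coin is a 4-subset of original coins, sitting at the sum of its constituents' positions, so when Bob moves a set, a quadruple's position increases by the number of its coins inside the moved set.

```python
from itertools import combinations

def quadruple_positions(pos):
    """Positions in the derived quadruple coin game: one coin per 4-subset
    of the original coins, at the sum of the four constituent positions."""
    return {q: sum(pos[i] for i in q) for q in combinations(range(len(pos)), 4)}

quads = quadruple_positions([0, 1, 2, 3, 4])
assert len(quads) == 5                 # C(5, 4) = 5 quadruple coins
assert quads[(0, 1, 2, 3)] == 6        # the four smallest original positions
assert min(quads.values()) == 6
```

In particular, the lowest quadruple coin tracks exactly the sum x₁ + x₂ + x₃ + x₄ that Lemma 12 bounds.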
Let us first define the amount that the i'th coin is moved in round r. We remark that this is not simply the change in x_i, since after round r the coins may change their order, so that the same coin is no longer the i'th coin. We will also define the amount that a quadruple coin is moved in round r. In general, when we write a 4-tuple, the four numbers should be in increasing order. We can also define the poset relation on 4-tuples of indices by (a₁, a₂, a₃, a₄) ≤ (b₁, b₂, b₃, b₄) if a_j ≤ b_j for all j.
Claim 13.
At any time there is a bijection of indices such that for any bijected values ,
and furthermore if then .
Proof.
We will define this bijection by defining on tuples for which . For all other tuples that remain unbijected after this process, we can pair them up arbitrarily.
Consider a tuple for which . Let's suppose that Bob is moving the even coins in round r (we'll define the bijection for the odd case in the same way, except switching the words "even" and "odd" everywhere). The only way the tuple can move less than required is if there is at most one even index among the four. Let us define the map depending on which, if any, of the four indices is even.
It’s straightforward to check that these are all well defined, and that each of the bijected tuples contains four distinct elements. (Recall we always assume .)
Next, we also need to check that for all . To see this, we show how to invert each tuple in the image of the map as we’ve defined so far. The key observation is that the five cases give five different parity tuples for the output tuple: for instance, the first case that are all odd results in an output where the parities are (odd, even, even, even), whereas all the rest of the maps give outputs where two indices are odd and two are even, so we always know how to invert a tuple that is (odd, even, even, even).
Finally, we check that . In the first case that are all odd and , has three even indices, so . In all the other cases, has one even index and has two even indices so .
Now, let us define the potential function
where
and
Since for all at the start of the game, .
Claim 14.
For any time , let us consider any bijected pair under . Define and . Then,
Proof.
We proceed by casework on whether and are greater than or less than .
Case 1: Both and are at least .
-
In this case, and , so
Case 2: .
-
We write for some . Then and . Let us again do casework on whether is larger or smaller than .
Subcase 2.1: .
-
In this case, . Then
where the second inequality follows from and the rearrangement inequality, and the third inequality follows from .
Subcase 2.2: .
-
In this case, , and . Then
-
Finally, we get our bound on . Let , , , and . We have that
which gives
where we use that , , and , and where the last inequality follows from so that
This completes the proof of Lemma 12.
4.3 The -List Decodability of
We now return to our main goal of this section, which is to show that is -list decodable up to radius . We restate the theorem below for convenience.
See 9
Proof.
Consider the first round at which . Such a round must exist because by Lemma 12, so . After round , only relevant coins remain on the board. Our goal is to show that another coin falls off.
Suppose that in round , the first coin is at position . From Lemma 11, we know that
(4)
We also know from Lemma 12 that
(5)
Adding of Equation 4 and of Equation 5, we get
Now, suppose that the third coin does not reach position by the end of the game. Then, at least one of the second and third coins must move in each of the last rounds, so that increases by at least each round. This means that at round ,
which gives that , contradicting our assumption that the third coin never falls off the board.
Therefore, no matter which sets the adversary chooses, there will always be at most two coins left on the board at the end of the game.
5 Spencer-Winkler Does Not Give -List Feedback Codes
In Section 4, we showed that the Spencer-Winkler strategy for Alice and Bob achieves a -list feedback code. In this strategy, Bob always splits the sets into the even-numbered coins and the odd-numbered coins.
A reasonable conjecture might be that this strategy is always optimal, and, in fact, we do not know of any proof that it is not. However, in this section we will show that for , it does not match our upper bound of from Section 3. Whether this is because the protocol is suboptimal or because the upper bound is not tight remains open for further investigation.
See 3
As in the previous sections, we discuss the counterexample in the coin game formulation. We will show an attack for all large enough . In this setting, Bob has coins for some , and in each of rounds (Alice and Bob choose ), he partitions the coins into sets. Then, Eve chooses one of the two sets and moves all of its coins up by . Eve wins if, at the end of the protocol, at least coins remain.
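To make the round dynamics concrete, here is a minimal sketch of one round of the coin game. It is an illustration only, not the paper's formal model: we assume "even coins" means coins at even positions, that a selected coin moves up by exactly 1, and the board values below are placeholder parameters rather than quantities from the analysis.

```python
def play_round(positions, eve_picks_even):
    """One round: Eve moves every coin in the selected parity class up by 1."""
    return [p + 1 if (p % 2 == 0) == eve_picks_even else p
            for p in positions]

def survivors(positions, threshold):
    """Coins still on the board, i.e. those that have not passed the threshold."""
    return [p for p in positions if p <= threshold]

# Toy round on a board of four coins: Eve selects the even-position coins.
board = play_round([0, 1, 2, 3], eve_picks_even=True)   # [1, 1, 3, 3]
```

Here Bob's split is always by position parity, matching the Spencer-Winkler strategy under discussion in this section.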
5.1 Eve’s Strategy
We begin by stating Eve’s procedure for determining which set of coins to select at each step. For simplicity, we’ll assume that is a multiple of and write . We’ll further assume is even and is an integer.
Protocol 2 : Eve’s Procedure
Alice has a coin . She and Bob simulate the coin game with coins. For the rounds, Eve does the following:
1. For the first rounds, Eve selects (moves up by ) the even coins.
2. For the next rounds, Eve selects the odd coins.
3. For the next rounds, Eve selects the even coins.
4. For the next rounds, Eve selects the odd coins.
5. For the next rounds, Eve selects the even coins.
6. For the final rounds, Eve selects the odd coins.
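The six phases above can be flattened into a per-round schedule. The sketch below leaves the phase lengths as placeholder inputs; the actual lengths (multiples of the parameter in the protocol) are left symbolic here.

```python
def eve_schedule(phase_lengths):
    """Expand Protocol 2's phases into a per-round list of Eve's choices.
    Phases alternate even, odd, even, odd, even, odd; True means Eve
    selects the even coins in that round."""
    picks = []
    for phase, length in enumerate(phase_lengths):
        # Phases at even indices (1st, 3rd, 5th) select the even coins.
        picks.extend([phase % 2 == 0] * length)
    return picks
```

For example, `eve_schedule([2, 3, 1, 1, 1, 2])` begins `[True, True, False, ...]`: two rounds of even selections followed by three rounds of odd ones.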
5.2 Analysis of Eve’s Strategy
We now establish a sequence of claims giving upper bounds on the positions of the first few coins after each of these steps (and one step in between). The claims are stated below and summarized in the following table. Each upper bound additionally carries an added term of , which we omit from the table for brevity.
| (even) | 0 () | 16q () | 16q () | 16q () | 16q () |
|---|---|---|---|---|---|
| (odd) | 16q ( ) | 16q () | 24q () | 24q () | 24q () |
| (even) | 16q () | 24q ( ) | 24q () | 28q () | 28q () |
| (odd) | 20q ( ) | 24q () | 28q ( ) | 28q () | 30q () |
| (even) | 20q () | 26q ( ) | 28q () | 30q ( ) | 30q () |
| (even) | 20q () | 28q ( ) | 28q () | 31q () | 31q () |
| (odd) | 23q ( ) | 28q () | 31q ( ) | 31q () | 33.5q () |
Establishing the bounds in the table shows that at the end of the rounds, which proves Theorem 3.
Now, we will prove the bounds listed in the table. Each argument below will establish the bounds indicated by the corresponding symbol. Each argument requires only that the bounds in the previous line of the table hold, so we can prove the bounds in a top-down order.
To show the () bounds, note that selecting the even coins never moves the coin in position .
Next, we establish the () bounds with the following claim. It shows us that for all times , for we have that
Claim 15.
For any round , as long as is sufficiently large, it holds that .
Proof.
Since at each step only the odd coins or only the even coins move (and there is an even number of coins in total), it holds that
Then,
which implies that
where the last line holds if is sufficiently large, for example larger than .
To show the ( ) bounds, note the following claim.
Claim 16.
For all , between rounds and , it holds that .
Proof.
All of the coins must have position at most after round . They can all move at most by round , so there are at least coins at position or earlier by the end of round , which is what we desired.
Next, we establish the () bounds with the following claim.
Claim 17.
For all coins and times and , if the set corresponding to parity is not chosen in that interval, then .
Proof.
Let . For the first steps, the coins are all necessarily before the coin , so does not increase during those rounds (if , simply stop at round ). Then, by Claim 16, in the remaining rounds, increases by at most . Therefore, the final position of after round is . This gives the desired bound because .
Finally, we establish the () bounds specifically for and between rounds and .
Claim 18.
After round where and is even and , it holds that .
Proof.
We will show this statement inductively. The statement holds when , giving the base case.
Assume this is true for , and we’ll prove it for . By Claim 16, can increase by at most in these steps, so we only have to address the case where exactly. Moreover, we must have or exactly, otherwise we would be done by Claim 17.
All the coins are in position at most at the end of steps, so they cannot surpass during these next steps.
In both the cases where and , both the 4th and 5th positions move up by exactly if the first three coins are guaranteed to be out of the picture.
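Putting the pieces together, one can simulate Eve's alternating-phase attack on a board of coins and inspect the final positions. This is again a sketch under the same placeholder assumptions (parity selection, unit moves); the coin count and the equal phase lengths below are illustrative values, not the multiples of q used in the proof.

```python
def simulate(num_coins, phase_lengths):
    """Run Eve's alternating-phase schedule (even, odd, even, ...) against
    coins starting at positions 1..num_coins; return final sorted positions."""
    positions = list(range(1, num_coins + 1))
    for phase, length in enumerate(phase_lengths):
        picks_even = (phase % 2 == 0)
        for _ in range(length):
            # Move every coin whose position parity matches Eve's pick.
            positions = [p + 1 if (p % 2 == 0) == picks_even else p
                         for p in positions]
    return sorted(positions)
```

One feature visible even in this toy version: once all coins share a parity, only every other round actually moves them, which is why the boundaries between Eve's phases matter.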
References
- [1] Rudolf Ahlswede, Ferdinando Cicalese, Christian Deppe, and Ugo Vaccaro. Two batch search with lie cost. IEEE Transactions on Information Theory, 55(4):1433–1439, 2009. doi:10.1109/TIT.2009.2013014.
- [2] Omar Alrabiah, Venkatesan Guruswami, and Ray Li. Randomly punctured Reed–Solomon codes achieve list-decoding capacity over linear-sized fields. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, pages 1458–1469, 2024. doi:10.1145/3618260.3649634.
- [3] Elwyn R. Berlekamp. Block coding with noiseless feedback. PhD thesis, Massachusetts Institute of Technology, 1964.
- [4] Elwyn R. Berlekamp. Block coding for the binary symmetric channel with noiseless, delayless feedback. Error-correcting Codes, pages 61–88, 1968.
- [5] Volodia M. Blinovskii. Bounds for codes in decoding by a list of finite length. Problemy Peredachi Informatsii, 22(1):11–25, 1986.
- [6] Joshua Brakensiek, Sivakanth Gopi, and Visu Makam. Generic Reed–Solomon codes achieve list-decoding capacity. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1488–1501, 2023. doi:10.1145/3564246.3585128.
- [7] Christian Deppe. Solution of Ulam's searching game with three lies or an optimal adaptive strategy for binary three-error-correcting codes. Discrete Mathematics, 224(1-3):79–98, 2000. doi:10.1016/S0012-365X(00)00109-6.
- [8] Christian Deppe. Coding with feedback and searching with lies. In Entropy, Search, Complexity, pages 27–70. Springer, 2007.
- [9] Peter Elias. List decoding for noisy channels. Massachusetts Institute of Technology, 1957.
- [10] Ran Gelles. Coding for Interactive Communication: A Survey. Foundations and Trends® in Theoretical Computer Science, 13:1–161, January 2017. doi:10.1561/0400000079.
- [11] Zeyu Guo and Zihan Zhang. Randomly punctured Reed–Solomon codes achieve the list decoding capacity over polynomial-size alphabets. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 164–176. IEEE, 2023. doi:10.1109/FOCS57990.2023.00019.
- [12] Meghal Gupta, Venkatesan Guruswami, and Rachel Yun Zhang. Binary error-correcting codes with minimal noiseless feedback. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1475–1487, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585126.
- [13] Venkatesan Guruswami. List decoding of error correcting codes. PhD thesis, Massachusetts Institute of Technology, 2001. URL: https://api.semanticscholar.org/CorpusID:9164141.
- [14] Venkatesan Guruswami. List decoding of binary codes–a brief survey of some recent results. In Coding and Cryptology: Second International Workshop, IWCC 2009, Zhangjiajie, China, June 1-5, 2009. Proceedings 2, pages 97–106. Springer, 2009. doi:10.1007/978-3-642-01877-0_10.
- [15] Venkatesan Guruswami. Linear-algebraic list decoding of folded Reed–Solomon codes. In 2011 IEEE 26th Annual Conference on Computational Complexity, pages 77–85. IEEE, 2011. doi:10.1109/CCC.2011.22.
- [16] Venkatesan Guruswami et al. Algorithmic results in list decoding. Foundations and Trends® in Theoretical Computer Science, 2(2):107–195, 2007.
- [17] Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for list-decoding and list-recovery of random linear codes. arXiv preprint arXiv:2004.13247, 2020. arXiv:2004.13247.
- [18] Venkatesan Guruswami and Srivatsan Narayanan. Combinatorial limitations of average-radius list-decoding. IEEE Transactions on Information Theory, 60(10):5827–5842, 2014. doi:10.1109/TIT.2014.2343224.
- [19] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Error-correction with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–150, 2008. doi:10.1109/TIT.2007.911222.
- [20] Wojciech Guzicki. Ulam’s searching game with two lies. Journal of Combinatorial Theory, Series A, 54(1):1–19, 1990. doi:10.1016/0097-3165(90)90002-E.
- [21] Bernhard Haeupler, Pritish Kamath, and Ameya Velingker. Communication with Partial Noiseless Feedback. In APPROX-RANDOM, 2015.
- [22] S. Muthukrishnan. On optimal strategies for searching in presence of errors. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 680–689, 1994. URL: http://dl.acm.org/citation.cfm?id=314464.314672.
- [23] Alberto Negro, Giuseppe Parlati, and P Ritrovato. Optimal adaptive search: reliable and unreliable models. In Proc. 5th Italian Conf. on Theoretical Computer Science, pages 211–231. World Scientific, 1995.
- [24] Andrzej Pelc. Solution of Ulam's problem on searching with a lie. Journal of Combinatorial Theory, Series A, 44(1):129–140, 1987. doi:10.1016/0097-3165(87)90065-3.
- [25] Andrzej Pelc. Searching games with errors—fifty years of coping with liars. Theoretical Computer Science, 270(1-2):71–109, 2002. doi:10.1016/S0304-3975(01)00303-6.
- [26] Morris Plotkin. Binary codes with specified minimum distance. IRE Transactions on Information Theory, 6(4):445–450, 1960. doi:10.1109/TIT.1960.1057584.
- [27] A. Rényi. On a problem of information theory. In MTA Mat. Kut. Int. Közl., volume 6B, pages 505–516, 1961.
- [28] Nicolas Resch, Chen Yuan, and Yihan Zhang. Zero-rate thresholds and new capacity bounds for list-decoding and list-recovery. IEEE Transactions on Information Theory, 2024.
- [29] Eric Ruzomberka, Yongkyu Jang, David J. Love, and H. Vincent Poor. The capacity of channels with o(1)-bit feedback. In 2023 IEEE International Symposium on Information Theory (ISIT), pages 2768–2773, 2023. doi:10.1109/ISIT54713.2023.10206714.
- [30] Ofer Shayevitz. On error correction with feedback under list decoding. In IEEE International Symposium on Information Theory, ISIT 2009, June 28 - July 3, 2009, Seoul, Korea, Proceedings, pages 1253–1257, August 2009. doi:10.1109/ISIT.2009.5205965.
- [31] Joel Spencer and Peter Winkler. Three Thresholds for a Liar. Combinatorics, Probability and Computing, 1(1):81–93, 1992. doi:10.1017/S0963548300000080.
- [32] Madhu Sudan. List decoding: Algorithms and applications. ACM SIGACT News, 31(1):16–27, 2000. doi:10.1145/346048.346049.
- [33] Stanislaw M. Ulam. Adventures of a Mathematician. University of California Press, 1991.
- [34] John M. Wozencraft. List decoding, volume 48. Massachusetts Institute of Technology, 1958.
- [35] K.Sh. Zigangirov. Number of correctable errors for transmission over a binary symmetrical channel with feedback. Problems Inform. Transmission, 12:85–97, 1976.
