Search Results

Documents authored by Resch, Nicolas


Document
Two-Round Perfectly Secure Message Transmission with Optimal Transmission Rate

Authors: Nicolas Resch and Chen Yuan

Published in: LIPIcs, Volume 267, 4th Conference on Information-Theoretic Cryptography (ITC 2023)


Abstract
In the model of Perfectly Secure Message Transmission (PSMT), a sender Alice is connected to a receiver Bob via n parallel two-way channels, and Alice holds an 𝓁 symbol secret that she wishes to communicate to Bob. There is an unbounded adversary Eve that controls t of the channels, where n = 2t+1. Eve is able to corrupt any symbol sent through the channels she controls, and furthermore may attempt to infer Alice’s secret by observing the symbols sent through the channels she controls. The transmission is required to be (a) reliable, i.e., Bob must always be able to recover Alice’s secret, regardless of Eve’s corruptions; and (b) private, i.e., Eve may not learn anything about Alice’s secret. We focus on the two-round model, where Bob is permitted to first transmit to Alice, and then Alice responds to Bob. In this work we provide upper and lower bounds for the PSMT model when the length of the communicated secret 𝓁 is asymptotically large. Specifically, we first construct a protocol that allows Alice to communicate an 𝓁 symbol secret to Bob by transmitting at most 2(1+o_{𝓁→∞}(1))n𝓁 symbols. Under a reasonable assumption (which is satisfied by all known efficient two-round PSMT protocols), we complement this with a lower bound showing that 2n𝓁 symbols are necessary for Alice to privately and reliably communicate her secret. This provides strong evidence that our construction is optimal (even up to the leading constant).
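
For a concrete feel for the channel model, the following Python sketch illustrates only the reliability requirement in the n = 2t+1 setting: naive repetition plus majority voting tolerates t corrupted channels. It is not the protocol from the paper (it provides no privacy and has poor rate); all names and parameter values are illustrative.

# A minimal sketch (not the paper's protocol): with n = 2t+1 channels and an
# adversary corrupting at most t of them, naive repetition plus majority vote
# already gives reliability -- though, unlike PSMT, it offers no privacy.
from collections import Counter
import random

def send_repetition(symbol, n):
    """Alice naively sends the same symbol on all n channels."""
    return [symbol] * n

def corrupt(channel_values, corrupted_indices, garbage):
    """Eve overwrites the symbols on the channels she controls."""
    out = list(channel_values)
    for i in corrupted_indices:
        out[i] = garbage
    return out

def receive_majority(channel_values):
    """Bob recovers the symbol by majority vote; correct whenever t < n/2."""
    return Counter(channel_values).most_common(1)[0][0]

t = 3
n = 2 * t + 1
sent = send_repetition("s", n)
received = corrupt(sent, random.sample(range(n), t), "junk")
assert receive_majority(received) == "s"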

Cite as

Nicolas Resch and Chen Yuan. Two-Round Perfectly Secure Message Transmission with Optimal Transmission Rate. In 4th Conference on Information-Theoretic Cryptography (ITC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 267, pp. 1:1-1:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{resch_et_al:LIPIcs.ITC.2023.1,
  author =	{Resch, Nicolas and Yuan, Chen},
  title =	{{Two-Round Perfectly Secure Message Transmission with Optimal Transmission Rate}},
  booktitle =	{4th Conference on Information-Theoretic Cryptography (ITC 2023)},
  pages =	{1:1--1:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-271-6},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{267},
  editor =	{Chung, Kai-Min},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2023.1},
  URN =		{urn:nbn:de:0030-drops-183297},
  doi =		{10.4230/LIPIcs.ITC.2023.1},
  annote =	{Keywords: Secure transmission, Information theoretical secure, MDS codes}
}
Document
Track A: Algorithms, Complexity and Games
Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery

Authors: Nicolas Resch, Chen Yuan, and Yihan Zhang

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
In this work we consider the list-decodability and list-recoverability of arbitrary q-ary codes, for all integer values of q ≥ 2. A code is called (p,L)_q-list-decodable if every radius pn Hamming ball contains less than L codewords; (p,𝓁,L)_q-list-recoverability is a generalization where we place radius pn Hamming balls on every point of a combinatorial rectangle with side length 𝓁 and again stipulate that there be less than L codewords. Our main contribution is to precisely calculate the maximum value of p for which there exist infinite families of positive rate (p,𝓁,L)_q-list-recoverable codes, the quantity we call the zero-rate threshold. Denoting this value by p_*, we in fact show that codes correcting a p_*+ε fraction of errors must have size O_ε(1), i.e., independent of n. Such a result is typically referred to as a "Plotkin bound." To complement this, a standard random code with expurgation construction shows that there exist positive rate codes correcting a p_*-ε fraction of errors. We also follow a classical proof template (typically attributed to Elias and Bassalygo) to derive from the zero-rate threshold other tradeoffs between rate and decoding radius for list-decoding and list-recovery. Technically, proving the Plotkin bound boils down to demonstrating the Schur convexity of a certain function defined on the q-simplex as well as the convexity of a univariate function derived from it. We remark that an earlier argument claimed similar results for q-ary list-decoding; however, we point out that this earlier proof is flawed.
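
As a purely illustrative reading of the definition, the following Python sketch brute-forces the (p,L)_q-list-decodability condition for a tiny code: every Hamming ball of radius pn must contain fewer than L codewords. It has nothing to do with the paper's proof techniques and is only feasible for very small n and q.

# A toy brute-force check of (p,L)_q-list-decodability for a tiny code:
# every Hamming ball of radius pn must contain fewer than L codewords.
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def is_list_decodable(code, q, n, p, L):
    radius = int(p * n)
    for center in product(range(q), repeat=n):
        ball_count = sum(1 for c in code if hamming(center, c) <= radius)
        if ball_count >= L:
            return False
    return True

# Example: the binary repetition code of length 5 with p = 0.4 and L = 2.
code = [(0,) * 5, (1,) * 5]
print(is_list_decodable(code, q=2, n=5, p=0.4, L=2))  # True: a radius-2 ball catches at most one codeword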

Cite as

Nicolas Resch, Chen Yuan, and Yihan Zhang. Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 99:1-99:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{resch_et_al:LIPIcs.ICALP.2023.99,
  author =	{Resch, Nicolas and Yuan, Chen and Zhang, Yihan},
  title =	{{Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{99:1--99:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.99},
  URN =		{urn:nbn:de:0030-drops-181518},
  doi =		{10.4230/LIPIcs.ICALP.2023.99},
  annote =	{Keywords: Coding theory, List-decoding, List-recovery, Zero-rate thresholds}
}
Document
Track A: Algorithms, Complexity and Games
Threshold Rates of Code Ensembles: Linear Is Best

Authors: Nicolas Resch and Chen Yuan

Published in: LIPIcs, Volume 229, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)


Abstract
In this work, we prove new results concerning the combinatorial properties of random linear codes. By applying the thresholds framework of Mosheiff et al. (FOCS 2020) we derive fine-grained results concerning the list-decodability and -recoverability of random linear codes. Firstly, we prove a lower bound on the list-size required for random linear codes over 𝔽_q that are ε-close to capacity to list-recover with error radius ρ and input lists of size 𝓁. We show that the list-size L must be at least (log_q binom(q,𝓁) - R)/ε, where R is the rate of the random linear code. This is analogous to a lower bound for list-decoding that was recently obtained by Guruswami et al. (IEEE TIT 2021B). As a comparison, we also pin down the list size of uniformly random codes, which is log_q binom(q,𝓁)/ε. This almost closes the O((q log L)/L) gap left by Guruswami et al. (IEEE TIT 2021A). This leaves open the possibility (which we consider likely) that random linear codes perform better than uniformly random codes for list-recoverability, in contrast to a recent gap shown for the case of list-recovery from erasures (Guruswami et al., IEEE TIT 2021B). Next, we consider list-decoding with constant list-sizes. Specifically, we obtain new lower bounds on the rate required for:
- list-of-3 decodability of random linear codes over 𝔽₂;
- list-of-2 decodability of random linear codes over 𝔽_q (for any q).
This expands upon Guruswami et al. (IEEE TIT 2021A), which only studied list-of-2 decodability of random linear codes over 𝔽₂. Further, in both cases we are able to show that the rate is larger than what is possible for uniformly random codes. A conclusion we draw from our work is that, for many combinatorial properties of interest, random linear codes actually perform better than uniformly random codes, in contrast to the apparently standard intuition that uniformly random codes are best.
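
To make the quoted bounds concrete, the snippet below simply evaluates the two list-size expressions from the abstract, (log_q C(q,𝓁) - R)/ε for random linear codes and log_q C(q,𝓁)/ε for uniformly random codes, at arbitrarily chosen parameters; the numbers carry no significance beyond illustration.

# Plugging illustrative parameters into the list-size bounds quoted in the
# abstract (values chosen arbitrarily; this only evaluates the formulas).
from math import comb, log

def rlc_list_size_lower_bound(q, ell, R, eps):
    """Lower bound (log_q C(q, ell) - R) / eps for random linear codes."""
    return (log(comb(q, ell), q) - R) / eps

def random_code_list_size(q, ell, eps):
    """List size log_q C(q, ell) / eps pinned down for uniformly random codes."""
    return log(comb(q, ell), q) / eps

q, ell, R, eps = 16, 4, 0.1, 0.01
print(rlc_list_size_lower_bound(q, ell, R, eps))
print(random_code_list_size(q, ell, eps))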

Cite as

Nicolas Resch and Chen Yuan. Threshold Rates of Code Ensembles: Linear Is Best. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 229, pp. 104:1-104:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{resch_et_al:LIPIcs.ICALP.2022.104,
  author =	{Resch, Nicolas and Yuan, Chen},
  title =	{{Threshold Rates of Code Ensembles: Linear Is Best}},
  booktitle =	{49th International Colloquium on Automata, Languages, and Programming (ICALP 2022)},
  pages =	{104:1--104:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-235-8},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{229},
  editor =	{Boja\'{n}czyk, Miko{\l}aj and Merelli, Emanuela and Woodruff, David P.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2022.104},
  URN =		{urn:nbn:de:0030-drops-164456},
  doi =		{10.4230/LIPIcs.ICALP.2022.104},
  annote =	{Keywords: Random Linear Codes, List-Decoding, List-Recovery, Threshold Rates}
}
Document
Sharp Threshold Rates for Random Codes

Authors: Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters

Published in: LIPIcs, Volume 185, 12th Innovations in Theoretical Computer Science Conference (ITCS 2021)


Abstract
Suppose that 𝒫 is a property that may be satisfied by a random code C ⊂ Σⁿ. For example, for some p ∈ (0,1), 𝒫 might be the property that there exist three elements of C that lie in some Hamming ball of radius pn. We say that R^* is the threshold rate for 𝒫 if a random code of rate R^* + ε is very likely to satisfy 𝒫, while a random code of rate R^* - ε is very unlikely to satisfy 𝒫. While random codes are well-studied in coding theory, even the threshold rates for relatively simple properties like the one above are not well understood. We characterize threshold rates for a rich class of properties. These properties, like the example above, are defined by the inclusion of specific sets of codewords which are also suitably "symmetric." For properties in this class, we show that the threshold rate is in fact equal to the lower bound that a simple first-moment calculation obtains. Our techniques not only pin down the threshold rate for the property 𝒫 above, they give sharp bounds on the threshold rate for list-recovery in several parameter regimes, as well as an efficient algorithm for estimating the threshold rates for list-recovery in general.
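
The first-moment calculation mentioned above can be illustrated on an even simpler property than the three-codeword example: for the property "some pair of codewords lies within Hamming distance pn", a union-bound estimate puts the threshold rate at (1 - h_q(p))/2. The Python sketch below computes this heuristic value; it is a generic illustration of the first-moment method, not the paper's characterization or algorithm.

# A generic first-moment heuristic (not the paper's algorithm): for the simpler
# property "some *pair* of codewords lies within Hamming distance pn", the
# expected number of such pairs in a random q-ary code of rate R is roughly
# q^{(2R - 1 + h_q(p)) n}, so the exponent changes sign at R* = (1 - h_q(p))/2.
from math import log

def h_q(p, q):
    """q-ary entropy function (for 0 < p <= 1 - 1/q)."""
    if p == 0:
        return 0.0
    return p * log(q - 1, q) - p * log(p, q) - (1 - p) * log(1 - p, q)

def first_moment_threshold_pairs(p, q):
    """Rate at which the expected number of close pairs switches from o(1) to huge."""
    return (1 - h_q(p, q)) / 2

print(first_moment_threshold_pairs(p=0.1, q=2))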

Cite as

Venkatesan Guruswami, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Sharp Threshold Rates for Random Codes. In 12th Innovations in Theoretical Computer Science Conference (ITCS 2021). Leibniz International Proceedings in Informatics (LIPIcs), Volume 185, pp. 5:1-5:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.ITCS.2021.5,
  author =	{Guruswami, Venkatesan and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary},
  title =	{{Sharp Threshold Rates for Random Codes}},
  booktitle =	{12th Innovations in Theoretical Computer Science Conference (ITCS 2021)},
  pages =	{5:1--5:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-177-1},
  ISSN =	{1868-8969},
  year =	{2021},
  volume =	{185},
  editor =	{Lee, James R.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2021.5},
  URN =		{urn:nbn:de:0030-drops-135446},
  doi =		{10.4230/LIPIcs.ITCS.2021.5},
  annote =	{Keywords: Coding theory, Random codes, Sharp thresholds}
}
Document
RANDOM
Bounds for List-Decoding and List-Recovery of Random Linear Codes

Authors: Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters

Published in: LIPIcs, Volume 176, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)


Abstract
A family of error-correcting codes is list-decodable from error fraction p if, for every code in the family, the number of codewords in any Hamming ball of fractional radius p is less than some integer L that is independent of the code length. It is said to be list-recoverable for input list size 𝓁 if for every sufficiently large subset of codewords (of size L or more), there is a coordinate where the codewords take more than 𝓁 values. The parameter L is said to be the "list size" in either case. The capacity, i.e., the largest possible rate for these notions as the list size L → ∞, is known to be 1-h_q(p) for list-decoding, and 1-log_q 𝓁 for list-recovery, where q is the alphabet size of the code family. In this work, we study the list size of random linear codes for both list-decoding and list-recovery as the rate approaches capacity. We show the following claims hold with high probability over the choice of the code (below q is the alphabet size, and ε > 0 is the gap to capacity).
- A random linear code of rate 1 - log_q(𝓁) - ε requires list size L ≥ 𝓁^{Ω(1/ε)} for list-recovery from input list size 𝓁. This is in surprising contrast to completely random codes, where L = O(𝓁/ε) suffices w.h.p.
- A random linear code of rate 1 - h_q(p) - ε requires list size L ≥ ⌊h_q(p)/ε + 0.99⌋ for list-decoding from error fraction p, when ε is sufficiently small.
- A random binary linear code of rate 1 - h₂(p) - ε is list-decodable from average error fraction p with list size L ≤ ⌊h₂(p)/ε⌋ + 2. (The average-error version measures the average Hamming distance of the codewords from the center of the Hamming ball, instead of the maximum distance as in list-decoding.)
The second and third results together precisely pin down the list sizes of binary random linear codes, for both list-decoding and average-radius list-decoding, to three possible values. Our lower bounds follow by exhibiting an explicit subset of codewords such that this subset - or some symbol-wise permutation of it - lies in a random linear code with high probability. This uses a recent characterization (Mosheiff, Resch, Ron-Zewi, Silas, Wootters, 2019) of configurations of codewords that are contained in random linear codes. Our upper bound follows from a refinement of the techniques of (Guruswami, Håstad, Sudan, Zuckerman, 2002) and strengthens a previous result of (Li, Wootters, 2018), which applied to list-decoding rather than average-radius list-decoding.
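
To see the second and third bounds side by side, the snippet below evaluates ⌊h_q(p)/ε + 0.99⌋ and ⌊h₂(p)/ε⌋ + 2 at sample parameters; the values are chosen only for illustration and are not taken from the paper.

# Evaluating the two list-size expressions quoted in the abstract at
# illustrative parameters: the lower bound floor(h_q(p)/eps + 0.99) and the
# binary average-radius upper bound floor(h_2(p)/eps) + 2.
from math import log, floor

def h_q(p, q):
    """q-ary entropy function (for 0 < p <= 1 - 1/q)."""
    return p * log(q - 1, q) - p * log(p, q) - (1 - p) * log(1 - p, q)

p, eps = 0.11, 0.01
lower_bound_binary = floor(h_q(p, 2) / eps + 0.99)
upper_bound_binary = floor(h_q(p, 2) / eps) + 2
print(lower_bound_binary, upper_bound_binary)  # the two values differ by at most 2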

Cite as

Venkatesan Guruswami, Ray Li, Jonathan Mosheiff, Nicolas Resch, Shashwat Silas, and Mary Wootters. Bounds for List-Decoding and List-Recovery of Random Linear Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 176, pp. 9:1-9:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.APPROX/RANDOM.2020.9,
  author =	{Guruswami, Venkatesan and Li, Ray and Mosheiff, Jonathan and Resch, Nicolas and Silas, Shashwat and Wootters, Mary},
  title =	{{Bounds for List-Decoding and List-Recovery of Random Linear Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020)},
  pages =	{9:1--9:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-164-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{176},
  editor =	{Byrka, Jaros{\l}aw and Meka, Raghu},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX/RANDOM.2020.9},
  URN =		{urn:nbn:de:0030-drops-126126},
  doi =		{10.4230/LIPIcs.APPROX/RANDOM.2020.9},
  annote =	{Keywords: list-decoding, list-recovery, random linear codes, coding theory}
}
Document
RANDOM
On List Recovery of High-Rate Tensor Codes

Authors: Swastik Kopparty, Nicolas Resch, Noga Ron-Zewi, Shubhangi Saraf, and Shashwat Silas

Published in: LIPIcs, Volume 145, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)


Abstract
We continue the study of list recovery properties of high-rate tensor codes, initiated by Hemenway, Ron-Zewi, and Wootters (FOCS'17). In that work it was shown that the tensor product of an efficient (poly-time) high-rate globally list recoverable code is approximately locally list recoverable, as well as globally list recoverable in probabilistic near-linear time. This was used in turn to give the first capacity-achieving list decodable codes with (1) local list decoding algorithms, and with (2) probabilistic near-linear time global list decoding algorithms. This also yielded constant-rate codes approaching the Gilbert-Varshamov bound with probabilistic near-linear time global unique decoding algorithms. In the current work we obtain the following results:
1) The tensor product of an efficient (poly-time) high-rate globally list recoverable code is globally list recoverable in deterministic near-linear time. This yields in turn the first capacity-achieving list decodable codes with deterministic near-linear time global list decoding algorithms. It also gives constant-rate codes approaching the Gilbert-Varshamov bound with deterministic near-linear time global unique decoding algorithms.
2) If the base code is additionally locally correctable, then the tensor product is (genuinely) locally list recoverable. This yields in turn (non-explicit) constant-rate codes approaching the Gilbert-Varshamov bound that are locally correctable with query complexity and running time N^{o(1)}. This improves over prior work by Gopi et al. (SODA'17; IEEE Transactions on Information Theory'18), which only gave query complexity N^{ε} with rate that is exponentially small in 1/ε.
3) A nearly-tight combinatorial lower bound on output list size for list recovering high-rate tensor codes. This bound in turn implies a nearly-tight lower bound of N^{Ω(1/log log N)} on the product of query complexity and output list size for locally list recovering high-rate tensor codes.
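
For readers unfamiliar with tensor codes, the following Python sketch shows the standard tensor-product construction that the paper studies: the generator matrix of C1 ⊗ C2 is the Kronecker product of the base generator matrices, and codewords, viewed as matrices, have every column in C1 and every row in C2. The example uses the [3,2] single-parity-check code and is unrelated to the paper's decoding algorithms.

# A small illustration of the object studied (not the paper's decoders): the
# tensor product C1 (x) C2 of two binary linear codes consists of all matrices
# whose columns are codewords of C1 and whose rows are codewords of C2; its
# generator matrix is the Kronecker product of the two generator matrices.
import numpy as np

def tensor_generator(G1, G2):
    """Generator matrix of C1 (x) C2 over GF(2)."""
    return np.kron(G1, G2) % 2

# [3,2] single-parity-check code tensored with itself: a [9,4] code whose
# codewords, viewed as 3x3 matrices, have even parity in every row and column.
G = np.array([[1, 0, 1],
              [0, 1, 1]])
GT = tensor_generator(G, G)
msg = np.array([1, 0, 1, 1])
codeword = (msg @ GT) % 2
matrix_view = codeword.reshape(3, 3)
assert not (matrix_view.sum(axis=0) % 2).any()  # every column has even parity
assert not (matrix_view.sum(axis=1) % 2).any()  # every row has even parity
print(matrix_view)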

Cite as

Swastik Kopparty, Nicolas Resch, Noga Ron-Zewi, Shubhangi Saraf, and Shashwat Silas. On List Recovery of High-Rate Tensor Codes. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 68:1-68:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kopparty_et_al:LIPIcs.APPROX-RANDOM.2019.68,
  author =	{Kopparty, Swastik and Resch, Nicolas and Ron-Zewi, Noga and Saraf, Shubhangi and Silas, Shashwat},
  title =	{{On List Recovery of High-Rate Tensor Codes}},
  booktitle =	{Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)},
  pages =	{68:1--68:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-125-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{145},
  editor =	{Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.68},
  URN =		{urn:nbn:de:0030-drops-112832},
  doi =		{10.4230/LIPIcs.APPROX-RANDOM.2019.68},
  annote =	{Keywords: Coding theory, Tensor codes, List-decoding and recovery, Local codes}
}
Document
Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs

Authors: Venkatesan Guruswami, Nicolas Resch, and Chaoping Xing

Published in: LIPIcs, Volume 102, 33rd Computational Complexity Conference (CCC 2018)


Abstract
For a vector space 𝔽^n over a field 𝔽, an (η,β)-dimension expander of degree d is a collection of d linear maps Γ_j : 𝔽^n → 𝔽^n such that for every subspace U of 𝔽^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d = O(1) maps Γ_j offers excellent "lossless" expansion with high probability: β ≈ d for η ≥ Ω(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even a modest expansion factor β = 1+ε with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank metric. Our approach yields the following:
- Lossless expansion over large fields; more precisely β ≥ (1-ε)d and η ≥ (1-ε)/d with d = O_ε(1), when |𝔽| ≥ Ω(n).
- Expansion that is optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |𝔽| ≥ n^δ.
Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1+Ω(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with β ≳ √d over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
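
The definition can be checked directly on small random instances. The Python sketch below draws d random linear maps over a small prime field, applies them to a low-dimensional subspace U, and compares dim(∑_j Γ_j(U)) with dim(U); it merely illustrates the definition on toy parameters with random maps, not the paper's explicit construction.

# A toy check of the (eta, beta)-dimension-expander definition over a small
# prime field (random maps, not the paper's explicit construction): for a
# subspace U, compare dim(sum_j Gamma_j(U)) against dim(U).
import random

def rank_mod_p(rows, p):
    """Rank of a matrix (list of rows) over GF(p) via Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank, cols = 0, len(rows[0]) if rows else 0
    for col in range(cols):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                factor = rows[r][col]
                rows[r] = [(a - factor * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def apply_map(Gamma, basis, p):
    """Images of the basis vectors of U under the linear map Gamma (n x n)."""
    return [[sum(Gamma[i][k] * v[k] for k in range(len(v))) % p for i in range(len(Gamma))]
            for v in basis]

p, n, d = 5, 8, 3
Gammas = [[[random.randrange(p) for _ in range(n)] for _ in range(n)] for _ in range(d)]
U_basis = [[random.randrange(p) for _ in range(n)] for _ in range(2)]  # dim(U) <= 2
image_rows = [row for Gamma in Gammas for row in apply_map(Gamma, U_basis, p)]
dim_U = rank_mod_p(U_basis, p)
dim_image = rank_mod_p(image_rows, p)
print(dim_U, dim_image, dim_image / max(dim_U, 1))  # expansion factor achieved on this U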

Cite as

Venkatesan Guruswami, Nicolas Resch, and Chaoping Xing. Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs. In 33rd Computational Complexity Conference (CCC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 102, pp. 4:1-4:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{guruswami_et_al:LIPIcs.CCC.2018.4,
  author =	{Guruswami, Venkatesan and Resch, Nicolas and Xing, Chaoping},
  title =	{{Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs}},
  booktitle =	{33rd Computational Complexity Conference (CCC 2018)},
  pages =	{4:1--4:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-069-9},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{102},
  editor =	{Servedio, Rocco A.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2018.4},
  URN =		{urn:nbn:de:0030-drops-88859},
  doi =		{10.4230/LIPIcs.CCC.2018.4},
  annote =	{Keywords: Algebraic constructions, coding theory, linear algebra, list-decoding, polynomial method, pseudorandomness}
}