Search Results

Documents authored by Naor, Moni


Document
From Donkeys to Kings in Tournaments

Authors: Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon

Published in: LIPIcs, Volume 308, 32nd Annual European Symposium on Algorithms (ESA 2024)


Abstract
A tournament is an orientation of a complete graph. A vertex that can reach every other vertex within two steps is called a king. We study the complexity of finding k kings in a tournament graph. We show that the randomized query complexity of finding k ≤ 3 kings is O(n), and that in the deterministic case it takes the same number of queries (up to a constant) as finding a single king (the best known deterministic algorithm makes O(n^{3/2}) queries). On the other hand, we show that finding k ≥ 4 kings requires Ω(n²) queries, even in the randomized case. For k ≥ 4 we therefore consider the RAM model. We show an algorithm that finds k kings in time O(kn²), which is optimal for constant values of k. Alternatively, one can find k ≥ 4 kings in time n^{ω} (the time for matrix multiplication). We provide evidence that this is optimal for large k by suggesting a fine-grained reduction from a variant of the triangle detection problem.
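
A minimal sketch illustrating the definition above, assuming the tournament is given as a full adjacency matrix. This is not one of the paper's query-efficient algorithms (reading the matrix already costs Θ(n²) edge probes); it only demonstrates the classical fact that a vertex of maximum out-degree is always a king.

# Toy illustration of the "king" definition, not the paper's algorithm.
# adj[i][j] == 1 iff the edge between i and j is oriented i -> j.

def is_king(adj, v):
    n = len(adj)
    for u in range(n):
        if u == v or adj[v][u]:
            continue                      # v reaches u directly (or u == v)
        # otherwise v must reach u via some intermediate w: v -> w -> u
        if not any(adj[v][w] and adj[w][u] for w in range(n)):
            return False
    return True

def max_out_degree_vertex(adj):
    return max(range(len(adj)), key=lambda v: sum(adj[v]))

# A 4-vertex tournament (an orientation of K_4); its max out-degree vertex is a king.
adj = [[0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
assert is_king(adj, max_out_degree_vertex(adj))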

Cite as

Amir Abboud, Tomer Grossman, Moni Naor, and Tomer Solomon. From Donkeys to Kings in Tournaments. In 32nd Annual European Symposium on Algorithms (ESA 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 308, pp. 3:1-3:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{abboud_et_al:LIPIcs.ESA.2024.3,
  author =	{Abboud, Amir and Grossman, Tomer and Naor, Moni and Solomon, Tomer},
  title =	{{From Donkeys to Kings in Tournaments}},
  booktitle =	{32nd Annual European Symposium on Algorithms (ESA 2024)},
  pages =	{3:1--3:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-338-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{308},
  editor =	{Chan, Timothy and Fischer, Johannes and Iacono, John and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2024.3},
  URN =		{urn:nbn:de:0030-drops-210740},
  doi =		{10.4230/LIPIcs.ESA.2024.3},
  annote =	{Keywords: Tournament Graphs, Kings, Query Complexity, Fine Grained Complexity}
}
Document
Are Your Keys Protected? Time Will Tell

Authors: Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik

Published in: LIPIcs, Volume 304, 5th Conference on Information-Theoretic Cryptography (ITC 2024)


Abstract
Side channel attacks, and in particular timing attacks, are a fundamental obstacle to obtaining secure implementations of algorithms and cryptographic protocols, and have been widely researched for decades. Yet while definitions for the security of cryptographic systems have been well established for just as long, none of these accepted definitions takes into account the running-time information leaked by executing the system. In this work, we lay the foundation for new cryptographic definitions of systems that take into account information about their leaked running time, focusing mainly on keyed functions such as signature and encryption schemes. Specifically: 1) We define several cryptographic properties to express the claim that the timing information does not help an adversary extract sensitive information, e.g. the key or the queries made. We highlight the definition of key-obliviousness, which means that an adversary cannot tell whether it received the timings of its queries under the actual key or under a random key. 2) We present a construction of key-oblivious pseudorandom permutations on a small or medium-sized domain. This construction is not "fixed-time", yet it is secure against any number of queries even if the adversary knows the running time exactly. Our construction, which we call Janus Sometimes Recurse, is a variant of the "Sometimes Recurse" shuffle by Morris and Rogaway. 3) We suggest a new security notion for keyed functions, called noticeable security, and prove that cryptographic schemes with noticeable security remain secure even when the exact timings are leaked, provided the implementation is key-oblivious. We show that our notion applies to cryptographic signatures, private-key encryption and PRPs.
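
A hedged sketch of the key-obliviousness experiment as paraphrased in the abstract: the adversary submits queries and receives running times measured either under the actual key or under an independent random key, and must tell the two cases apart. The helper names and the toy leaky function below are illustrative assumptions, not the paper's formalism.

# Illustrative sketch only; not the paper's definitions or construction.
import secrets
import time

def timed(f, *args):
    start = time.perf_counter()
    f(*args)
    return time.perf_counter() - start

def key_obliviousness_experiment(keyed_fn, keygen, queries, adversary_guess):
    # Give the adversary only the timings, measured either under the real
    # key (b == 0) or under a fresh random key (b == 1).
    real_key, other_key = keygen(), keygen()
    b = secrets.randbits(1)
    timing_key = real_key if b == 0 else other_key
    timings = [timed(keyed_fn, timing_key, q) for q in queries]
    return adversary_guess(timings) == b          # True iff the adversary wins

# Toy keyed function whose running time depends on how much of the key a
# query matches (the classic non-constant-time comparison leak, exaggerated).
def leaky_check(key, query):
    for k_byte, q_byte in zip(key, query):
        if k_byte != q_byte:
            return False
        time.sleep(0.0001)
    return True

keygen = lambda: secrets.token_bytes(8)
queries = [secrets.token_bytes(8) for _ in range(16)]
print(key_obliviousness_experiment(leaky_check, keygen, queries, lambda ts: 0))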

Cite as

Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik. Are Your Keys Protected? Time Will Tell. In 5th Conference on Information-Theoretic Cryptography (ITC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 304, pp. 3:1-3:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bendov_et_al:LIPIcs.ITC.2024.3,
  author =	{Ben Dov, Yoav and David, Liron and Naor, Moni and Tzalik, Elad},
  title =	{{Are Your Keys Protected? Time Will Tell}},
  booktitle =	{5th Conference on Information-Theoretic Cryptography (ITC 2024)},
  pages =	{3:1--3:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-333-1},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{304},
  editor =	{Aggarwal, Divesh},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2024.3},
  URN =		{urn:nbn:de:0030-drops-205119},
  doi =		{10.4230/LIPIcs.ITC.2024.3},
  annote =	{Keywords: Side channel attacks, Timing attacks, Keyed functions, Key oblivious, Noticeable security}
}
Document
New Algorithms and Applications for Risk-Limiting Audits

Authors: Bar Karov and Moni Naor

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Risk-limiting audits (RLAs) are a significant tool for increasing confidence in the accuracy of elections. They consist of randomized algorithms which check that an election's vote tally, as reported by a vote tabulation system, corresponds to the correct candidates winning. If an initial vote count leads to the wrong election winner, an RLA guarantees to identify the error with high probability over its own randomness. These audits operate by sequentially sampling and examining ballots until they can either confirm the reported winner or identify the true winner. The first part of this work suggests a new generic method, called "Batchcomp", for converting classical (ballot-level) RLAs into ones that operate on batches. As a concrete application of the suggested method, we develop the first RLA for the Israeli Knesset elections and use Batchcomp to convert it into one that operates on batches. We ran the suggested method on the real results of recent Knesset elections. The second part of this work suggests a new use-case for RLAs: verifying that a population census leads to the correct allocation of parliament seats to a nation's federal states. We present an adaptation of ALPHA [Stark, 2023], an existing RLA method, to one which applies to censuses. The suggested census RLA relies on data from the census and from an additional procedure, called a post-enumeration survey, which is already conducted in many countries today.
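
A minimal sketch of the generic "sample until confident" RLA loop described above, for a two-candidate race and in the style of the BRAVO sequential test; this is an illustrative assumption-laden sketch, not the paper's Batchcomp method or its Knesset and census audits.

# Generic two-candidate ballot-polling RLA sketch (BRAVO-style); not Batchcomp.
import random

def ballot_polling_rla(ballots, reported_winner_share, alpha=0.05):
    """ballots: list of 0/1 votes (1 = reported winner); reported_winner_share
    is the winner's reported vote share, assumed > 0.5."""
    assert reported_winner_share > 0.5
    order = random.sample(range(len(ballots)), len(ballots))  # random sampling order
    ratio = 1.0   # likelihood ratio: reported share vs. a tied election
    for count, i in enumerate(order, start=1):
        if ballots[i] == 1:
            ratio *= 2 * reported_winner_share
        else:
            ratio *= 2 * (1 - reported_winner_share)
        if ratio >= 1 / alpha:
            return f"reported winner confirmed after {count} ballots"
    return "audit escalated to a full hand count"

# Example: 10,000 ballots with a 55% winner, reported share 55%, risk limit 5%.
votes = [1] * 5500 + [0] * 4500
print(ballot_polling_rla(votes, reported_winner_share=0.55))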

Cite as

Bar Karov and Moni Naor. New Algorithms and Applications for Risk-Limiting Audits. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 2:1-2:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{karov_et_al:LIPIcs.FORC.2023.2,
  author =	{Karov, Bar and Naor, Moni},
  title =	{{New Algorithms and Applications for Risk-Limiting Audits}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{2:1--2:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.2},
  URN =		{urn:nbn:de:0030-drops-179232},
  doi =		{10.4230/LIPIcs.FORC.2023.2},
  annote =	{Keywords: Risk-Limiting Audit, RLA, Batch-Level RLA, Census}
}
Document
Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes

Authors: Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik

Published in: LIPIcs, Volume 256, 4th Symposium on Foundations of Responsible Computing (FORC 2023)


Abstract
Side channel attacks, and in particular timing attacks, are a fundamental obstacle to the secure implementation of algorithms and cryptographic protocols; these attacks and their countermeasures have been widely researched for decades. We offer a new perspective on resistance to timing attacks, focusing on sampling algorithms and their application to differential privacy. We define sampling algorithms that do not reveal information about the sampled output through their running time. More specifically: (1) We characterize the distributions that can be sampled from in a "time oblivious" way, meaning that the running time does not leak any information about the output, and we provide an algorithm for sampling from these distributions that is optimal in terms of the randomness used. We also give an example of an efficient randomized algorithm 𝒜 such that no subexponential algorithm with the same output as 𝒜 avoids revealing information about the output or the input; thus leaking information on either the input or the output is unavoidable. (2) We consider the impact of timing attacks on (pure) differential privacy mechanisms. It turns out that if the range of the mechanism is unbounded, as in counting, then any time-oblivious pure DP mechanism must give a useless output with constant probability (the constant is mechanism dependent) and must have infinite expected running time. We show that, up to these limitations, it is possible to transform any pure DP mechanism into a time-oblivious one.
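
A hedged toy contrast illustrating the "time oblivious" notion above (not the paper's characterization or algorithm): a naive sampler for a Geometric(1/2) value flips coins until the first heads, so its running time reveals the output exactly, whereas a uniform sampler over a power-of-two domain can consume a fixed number of random bits and leak nothing through its running time.

# Toy illustration of time-oblivious vs. time-leaking sampling; not from the paper.
import secrets

def geometric_naive():
    """Number of flips until the first heads: running time == output."""
    flips = 1
    while secrets.randbits(1) == 0:
        flips += 1
    return flips

def uniform_time_oblivious(k):
    """Uniform over {0, ..., 2^k - 1} using exactly k coin flips,
    so the running time carries no information about the sample."""
    return secrets.randbits(k)

print(geometric_naive())          # an observer timing this learns the output
print(uniform_time_oblivious(8))  # fixed work regardless of the sample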

Cite as

Yoav Ben Dov, Liron David, Moni Naor, and Elad Tzalik. Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes. In 4th Symposium on Foundations of Responsible Computing (FORC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 256, pp. 11:1-11:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bendov_et_al:LIPIcs.FORC.2023.11,
  author =	{Ben Dov, Yoav and David, Liron and Naor, Moni and Tzalik, Elad},
  title =	{{Resistance to Timing Attacks for Sampling and Privacy Preserving Schemes}},
  booktitle =	{4th Symposium on Foundations of Responsible Computing (FORC 2023)},
  pages =	{11:1--11:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-272-3},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{256},
  editor =	{Talwar, Kunal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2023.11},
  URN =		{urn:nbn:de:0030-drops-179329},
  doi =		{10.4230/LIPIcs.FORC.2023.11},
  annote =	{Keywords: Differential Privacy}
}
Document
Mirror Games Against an Open Book Player

Authors: Roey Magen and Moni Naor

Published in: LIPIcs, Volume 226, 11th International Conference on Fun with Algorithms (FUN 2022)


Abstract
Mirror games were invented by Garg and Schneider (ITCS 2019). Alice and Bob take turns (with Alice playing first) in declaring numbers from the set {1, 2, …, 2n}. If a player picks a number that was previously played, that player loses and the other player wins. If all numbers are declared without repetition, the result is a draw. Bob has a simple mirror strategy that assures he won't lose and requires no memory. On the other hand, Garg and Schneider showed that every deterministic Alice needs memory of size linear in n in order to secure a draw. Regarding probabilistic strategies, previous work showed that a model where Alice has access to a secret random perfect matching over {1, 2, …, 2n} allows her to achieve a draw in the game w.p. at least 1 - 1/n using only polylog bits of memory. We show that the requirement for secret bits is crucial: for an "open book" Alice with no secrets (Bob knows her memory but not her future coin flips) and memory of at most n/4c bits for any c ≥ 2, there is a Bob that wins w.p. close to 1 - 2^{-c/2}.
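
A minimal sketch of one natural memoryless mirror strategy for Bob, pairing each x with 2n + 1 - x; the abstract does not fix a particular pairing, so this choice is an illustrative assumption. Bob only needs Alice's last move: her move and its partner are always fresh unless Alice has already repeated.

# One possible memoryless mirror strategy for Bob; pairing choice is illustrative.
def bob_mirror_move(alice_move, n):
    return 2 * n + 1 - alice_move

def play(alice_strategy, n):
    played = set()
    for turn in range(n):              # 2n numbers => n Alice/Bob pairs of moves
        a = alice_strategy(turn, played)
        if a in played:
            return "Bob wins"          # Alice repeated a number
        played.add(a)
        b = bob_mirror_move(a, n)
        if b in played:
            return "Alice wins"        # cannot happen against a legal Alice
        played.add(b)
    return "draw"

print(play(lambda turn, played: turn + 1, n=8))  # Alice plays 1, 2, 3, ...: a draw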

Cite as

Roey Magen and Moni Naor. Mirror Games Against an Open Book Player. In 11th International Conference on Fun with Algorithms (FUN 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 226, pp. 20:1-20:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{magen_et_al:LIPIcs.FUN.2022.20,
  author =	{Magen, Roey and Naor, Moni},
  title =	{{Mirror Games Against an Open Book Player}},
  booktitle =	{11th International Conference on Fun with Algorithms (FUN 2022)},
  pages =	{20:1--20:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-232-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{226},
  editor =	{Fraigniaud, Pierre and Uno, Yushi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2022.20},
  URN =		{urn:nbn:de:0030-drops-159900},
  doi =		{10.4230/LIPIcs.FUN.2022.20},
  annote =	{Keywords: Mirror Games, Space Complexity, Eventown-Oddtown}
}
Document
Keep That Card in Mind: Card Guessing with Limited Memory

Authors: Boaz Menuhin and Moni Naor

Published in: LIPIcs, Volume 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022)


Abstract
A card guessing game is played between two players, Guesser and Dealer. At the beginning of the game, the Dealer holds a deck of n cards (labeled 1, ..., n). For n turns, the Dealer draws a card from the deck, the Guesser guesses which card was drawn, and then the card is discarded from the deck. The Guesser receives a point for each correctly guessed card. With perfect memory, a Guesser can keep track of all cards that were played so far and pick at random a card that has not appeared yet, yielding in expectation ln n correct guesses, regardless of how the Dealer arranges the deck. With no memory, the best a Guesser can do results in a single correct guess in expectation. We consider the case of a memory-bounded Guesser that has m < n memory bits. We show that the performance of such a memory-bounded Guesser depends heavily on the behavior of the Dealer. In more detail, we show that there is a gap between the static case, where the Dealer draws cards from a properly shuffled deck or a prearranged one, and the adaptive case, where the Dealer draws cards thoughtfully, in an adversarial manner. Specifically: 1) We show a Guesser with O(log² n) memory bits that scores a near-optimal result against any static Dealer. 2) We show that no Guesser with m bits of memory can score better than O(√m) correct guesses against a random Dealer; thus no Guesser can score better than min{√m, ln n}, i.e., the above Guesser is optimal. 3) We show an efficient adaptive Dealer against which no Guesser with m memory bits can make more than ln m + 2 ln log n + O(1) correct guesses in expectation. These results are (almost) tight, and we prove them using compression arguments that harness the guessing strategy for encoding.
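
A small simulation of the perfect-memory Guesser described above against a shuffled deck; at each turn with r unseen cards the guess is correct with probability 1/r, so the expected score is the harmonic number H_n = 1 + 1/2 + … + 1/n ≈ ln n. This is only the baseline from the abstract, not the paper's low-memory Guesser.

# Baseline perfect-memory Guesser vs. a shuffled deck; not the O(log^2 n)-bit Guesser.
import math
import random

def play_once(n):
    deck = list(range(1, n + 1))
    random.shuffle(deck)
    remaining = set(deck)
    score = 0
    for card in deck:
        guess = random.choice(tuple(remaining))  # uniform over unseen cards
        if guess == card:
            score += 1
        remaining.remove(card)
    return score

n, trials = 128, 2000
avg = sum(play_once(n) for _ in range(trials)) / trials
print(f"average score {avg:.2f}, H_n = {sum(1 / k for k in range(1, n + 1)):.2f}, ln n = {math.log(n):.2f}")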

Cite as

Boaz Menuhin and Moni Naor. Keep That Card in Mind: Card Guessing with Limited Memory. In 13th Innovations in Theoretical Computer Science Conference (ITCS 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 215, pp. 107:1-107:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{menuhin_et_al:LIPIcs.ITCS.2022.107,
  author =	{Menuhin, Boaz and Naor, Moni},
  title =	{{Keep That Card in Mind: Card Guessing with Limited Memory}},
  booktitle =	{13th Innovations in Theoretical Computer Science Conference (ITCS 2022)},
  pages =	{107:1--107:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-217-4},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{215},
  editor =	{Braverman, Mark},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2022.107},
  URN =		{urn:nbn:de:0030-drops-157039},
  doi =		{10.4230/LIPIcs.ITCS.2022.107},
  annote =	{Keywords: Adaptivity vs Non-adaptivity, Adversarial Robustness, Card Guessing, Compression Argument, Information Theory, Streaming Algorithms, Two Player Game}
}
Document
Out-Of-Band Authenticated Group Key Exchange: From Strong Authentication to Immediate Key Delivery

Authors: Moni Naor, Lior Rotem, and Gil Segev

Published in: LIPIcs, Volume 163, 1st Conference on Information-Theoretic Cryptography (ITC 2020)


Abstract
Given the inherent ad-hoc nature of popular communication platforms, out-of-band authenticated key-exchange protocols are becoming widely deployed: key-exchange protocols that enable users to detect man-in-the-middle attacks by manually authenticating one short value. In this work we put forward the notion of immediate key delivery for such protocols, requiring that even if some users participate in the protocol but do not complete it (e.g., due to losing data connectivity or to other common synchronicity issues), the remaining users should still agree on a shared secret. A property of a similar flavor was introduced by Alwen, Coretti and Dodis (EUROCRYPT '19), asking for immediate decryption of messages in user-to-user messaging while assuming that a shared secret has already been established; but the underlying issue is crucial already during the initial key exchange and goes far beyond the context of messaging. Equipped with our immediate key delivery property, we formalize strong notions of security for out-of-band authenticated group key exchange, and demonstrate that the existing protocols either do not satisfy our notions of security or are impractical (these include, in particular, the protocols deployed by Telegram, Signal and WhatsApp). Then, based on the existence of any passively-secure key-exchange protocol (e.g., the Diffie-Hellman protocol), we construct an out-of-band authenticated group key-exchange protocol satisfying our notions of security. Our protocol is inspired by techniques that have been developed in the context of fair string sampling in order to minimize the effect of adversarial aborts, and offers the optimal tradeoff between the length of its out-of-band value and its security.
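
A toy sketch of the passively-secure building block named in the abstract, unauthenticated Diffie-Hellman; the tiny parameters are illustrative assumptions (completely insecure), and this two-party exchange is not the paper's out-of-band authenticated group protocol.

# Toy unauthenticated Diffie-Hellman; illustrative parameters, not secure,
# and not the paper's group key-exchange construction.
import secrets

p = 2**127 - 1   # a Mersenne prime; toy modulus, real deployments use standardized groups
g = 5            # toy base

def dh_keypair(p, g):
    sk = secrets.randbelow(p - 2) + 1        # private exponent
    return sk, pow(g, sk, p)                 # (secret, public share)

alice_sk, alice_pub = dh_keypair(p, g)
bob_sk, bob_pub = dh_keypair(p, g)

# Each side combines its own secret with the other's public share.
alice_shared = pow(bob_pub, alice_sk, p)
bob_shared = pow(alice_pub, bob_sk, p)
assert alice_shared == bob_shared   # a passive eavesdropper sees only p, g and the public shares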

Cite as

Moni Naor, Lior Rotem, and Gil Segev. Out-Of-Band Authenticated Group Key Exchange: From Strong Authentication to Immediate Key Delivery. In 1st Conference on Information-Theoretic Cryptography (ITC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 163, pp. 9:1-9:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{naor_et_al:LIPIcs.ITC.2020.9,
  author =	{Naor, Moni and Rotem, Lior and Segev, Gil},
  title =	{{Out-Of-Band Authenticated Group Key Exchange: From Strong Authentication to Immediate Key Delivery}},
  booktitle =	{1st Conference on Information-Theoretic Cryptography (ITC 2020)},
  pages =	{9:1--9:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-151-1},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{163},
  editor =	{Tauman Kalai, Yael and Smith, Adam D. and Wichs, Daniel},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITC.2020.9},
  URN =		{urn:nbn:de:0030-drops-121146},
  doi =		{10.4230/LIPIcs.ITC.2020.9},
  annote =	{Keywords: End-to-end encryption, out-of-band authentication, key exchange}
}
Document
Can Two Walk Together: Privacy Enhancing Methods and Preventing Tracking of Users

Authors: Moni Naor and Neil Vexler

Published in: LIPIcs, Volume 156, 1st Symposium on Foundations of Responsible Computing (FORC 2020)


Abstract
We present a new concern when collecting data from individuals, arising from the attempt to mitigate privacy leakage in multiple reporting: tracking of users participating in the data collection via the very mechanisms added to provide privacy. We present several definitions for untrackable mechanisms, inspired by the differential privacy framework. Specifically, we define the trackable parameter as the log of the maximum ratio between the probability that a set of reports originated from a single user and the probability that the same set of reports originated from two users (with the same private value). We explore the implications of this new definition. We show how differentially private and untrackable mechanisms can be combined to achieve a bound for the problem of detecting when a certain user changed their private value. Examining Google's deployed solution for everlasting privacy, we show that RAPPOR (Erlingsson et al., ACM CCS 2014) is trackable in our framework for the parameters presented in their paper. We analyze a variant of randomized response for collecting statistics of single bits, Bitwise Everlasting Privacy, that achieves good accuracy and everlasting privacy while being only reasonably untrackable: its trackable parameter grows linearly in the number of reports. For collecting statistics about data from larger domains (for histograms and heavy hitters) we present a mechanism that prevents tracking for a limited number of responses. We also present the concept of Mechanism Chaining, using the output of one mechanism as the input of another, in the scope of Differential Privacy, and show that the chaining of an ε₁-LDP mechanism with an ε₂-LDP mechanism is ln((e^{ε₁+ε₂} + 1)/(e^{ε₁} + e^{ε₂}))-LDP and that this bound is tight.
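
A quick numeric check of the chaining bound stated above, comparing it against min(ε₁, ε₂) and against the naive ε₁ + ε₂; this is a plain evaluation of the formula, not a derivation from the paper.

# Evaluate the chaining bound from the abstract for a few parameter pairs.
import math

def chained_epsilon(eps1, eps2):
    # ln((e^(eps1+eps2) + 1) / (e^eps1 + e^eps2)), the bound from the abstract;
    # for eps1, eps2 >= 0 it is never larger than min(eps1, eps2).
    return math.log((math.exp(eps1 + eps2) + 1) / (math.exp(eps1) + math.exp(eps2)))

for eps1, eps2 in [(1.0, 1.0), (0.5, 2.0), (3.0, 0.1)]:
    print(f"eps1={eps1}, eps2={eps2}: chained {chained_epsilon(eps1, eps2):.4f} "
          f"<= min {min(eps1, eps2)} <= sum {eps1 + eps2}")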

Cite as

Moni Naor and Neil Vexler. Can Two Walk Together: Privacy Enhancing Methods and Preventing Tracking of Users. In 1st Symposium on Foundations of Responsible Computing (FORC 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 156, pp. 4:1-4:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{naor_et_al:LIPIcs.FORC.2020.4,
  author =	{Naor, Moni and Vexler, Neil},
  title =	{{Can Two Walk Together: Privacy Enhancing Methods and Preventing Tracking of Users}},
  booktitle =	{1st Symposium on Foundations of Responsible Computing (FORC 2020)},
  pages =	{4:1--4:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-142-9},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{156},
  editor =	{Roth, Aaron},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FORC.2020.4},
  URN =		{urn:nbn:de:0030-drops-120205},
  doi =		{10.4230/LIPIcs.FORC.2020.4},
  annote =	{Keywords: Differential Privacy, Surveillance}
}
Document
Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Authors: Tomer Grossman, Ilan Komargodski, and Moni Naor

Published in: LIPIcs, Volume 151, 11th Innovations in Theoretical Computer Science Conference (ITCS 2020)


Abstract
Instance complexity is a measure of the goodness of an algorithm in which the performance of one algorithm is compared to that of others on each individual input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively. We initiate the systematic study of instance complexity and optimality in the query model (a.k.a. the decision tree model). In this model, instance optimality of an algorithm for computing a function is the requirement that the complexity of the algorithm on any input is at most a constant factor larger than the complexity of the best correct algorithm. That is, we compare the decision tree to one that receives a certificate, and its complexity is measured only if the certificate is correct (but correctness should hold on any input). We study both deterministic and randomized decision trees and provide various characterizations and barriers to more general results. We introduce a new measure of complexity called unlabeled-certificate complexity, appropriate for graph properties and other functions with symmetries, where only information about the structure of the graph is known to the competing algorithm. More precisely, the certificate is some permutation of the input (rather than the input itself) and correctness should be maintained even if the certificate is wrong. First, we show that such an unlabeled certificate is sometimes very helpful in the worst case. We then study instance optimality with respect to this measure of complexity, where an algorithm is said to be instance optimal if for every input it performs roughly as well as the best algorithm that is given an unlabeled certificate (but is correct on every input). We show that instance optimality depends on the group of permutations under consideration. Our proofs rely on techniques from hypothesis testing and analysis of random graphs.
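
A toy illustration of a decision tree that receives a certificate, in the spirit of the model above: the algorithm must be correct on every input, but its query count is measured only when the certificate is honest. The OR function and the helper below are standard illustrative assumptions, not results from the paper.

# Certificate-aided decision tree for OR of n bits; standard toy example.
def or_with_certificate(x, certificate):
    """certificate: an index claimed to hold a 1, or None."""
    queries = 0
    if certificate is not None:
        queries += 1
        if x[certificate] == 1:       # honest certificate: one query suffices
            return 1, queries
    for i, bit in enumerate(x):       # fall back: stay correct on every input
        if certificate is not None and i == certificate:
            continue                  # already queried
        queries += 1
        if bit == 1:
            return 1, queries
    return 0, queries

print(or_with_certificate([0, 0, 1, 0], certificate=2))   # honest hint: (1, 1)
print(or_with_certificate([0, 0, 1, 0], certificate=0))   # wrong hint: (1, 3)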

Cite as

Tomer Grossman, Ilan Komargodski, and Moni Naor. Instance Complexity and Unlabeled Certificates in the Decision Tree Model. In 11th Innovations in Theoretical Computer Science Conference (ITCS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 151, pp. 56:1-56:38, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{grossman_et_al:LIPIcs.ITCS.2020.56,
  author =	{Grossman, Tomer and Komargodski, Ilan and Naor, Moni},
  title =	{{Instance Complexity and Unlabeled Certificates in the Decision Tree Model}},
  booktitle =	{11th Innovations in Theoretical Computer Science Conference (ITCS 2020)},
  pages =	{56:1--56:38},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-134-4},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{151},
  editor =	{Vidick, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2020.56},
  URN =		{urn:nbn:de:0030-drops-117418},
  doi =		{10.4230/LIPIcs.ITCS.2020.56},
  annote =	{Keywords: decision tree complexity, instance complexity, instance optimality, query complexity, unlabeled certificates}
}
Document
The Journey from NP to TFNP Hardness

Authors: Pavel Hubáček, Moni Naor, and Eylon Yogev

Published in: LIPIcs, Volume 67, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017)


Abstract
The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses and, at the same time, to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g. program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that TFNP hardness cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
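
A toy example of a total search problem in the sense of the first sentence: every function that compresses its domain has a collision by the pigeonhole principle, so "find a collision" always has a solution; whether such problems can be made hard on average is what the paper studies. The brute-force solver below is a standard illustration, not a construction from the paper.

# Total search problem illustration: collisions in a compressing function always exist.
def find_collision(f, n):
    """f maps {0, ..., n} (n+1 inputs) into {0, ..., n-1}; a collision must exist."""
    seen = {}
    for x in range(n + 1):
        y = f(x)
        if y in seen:
            return seen[y], x          # two distinct inputs with the same image
        seen[y] = x
    raise AssertionError("impossible: f is not compressing")

n = 10
print(find_collision(lambda x: (x * x + 3) % n, n))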

Cite as

Pavel Hubáček, Moni Naor, and Eylon Yogev. The Journey from NP to TFNP Hardness. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 67, pp. 60:1-60:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{hubacek_et_al:LIPIcs.ITCS.2017.60,
  author =	{Hub\'{a}cek, Pavel and Naor, Moni and Yogev, Eylon},
  title =	{{The Journey from NP to TFNP Hardness}},
  booktitle =	{8th Innovations in Theoretical Computer Science Conference (ITCS 2017)},
  pages =	{60:1--60:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-029-3},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{67},
  editor =	{Papadimitriou, Christos H.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2017.60},
  URN =		{urn:nbn:de:0030-drops-81627},
  doi =		{10.4230/LIPIcs.ITCS.2017.60},
  annote =	{Keywords: TFNP, derandomization, one-way functions, average-case hardness}
}