A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings, blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost all inputs. The key ingredient is a technique that translates hardness into targeted hitting sets in an instance-wise fashion, based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM.
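The example tradeoff quoted in the abstract can be summarized as a single statement. The following is a minimal, compilable LaTeX sketch of that statement; the theorem environment, symbol choices, and exact phrasing are illustrative paraphrases of the abstract, not the paper's verbatim formulation.

% Sketch restating the abstract's example hardness-to-derandomization result.
% Environment name and wording are assumptions for illustration only.
\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}[hardness to derandomization, informal]
Let $f \colon \{0,1\}^* \to \{0,1\}^*$ be length preserving and computable by a
nondeterministic algorithm running in time $n^a$ for a constant $a$. If every
Arthur--Merlin protocol running in time $n^c$, where $c = O(\log^2 a)$, computes
$f$ correctly on only finitely many inputs, then $\mathrm{AM} \subseteq \mathrm{NP}$.
\end{theorem}
\end{document}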
@InProceedings{vanmelkebeek_et_al:LIPIcs.CCC.2023.17,
  author    = {van Melkebeek, Dieter and Mocelin Sdroievski, Nicollas},
  title     = {{Instance-Wise Hardness Versus Randomness Tradeoffs for Arthur-Merlin Protocols}},
  booktitle = {38th Computational Complexity Conference (CCC 2023)},
  pages     = {17:1--17:36},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-95977-282-2},
  ISSN      = {1868-8969},
  year      = {2023},
  volume    = {264},
  editor    = {Ta-Shma, Amnon},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2023.17},
  URN       = {urn:nbn:de:0030-drops-182870},
  doi       = {10.4230/LIPIcs.CCC.2023.17},
  annote    = {Keywords: Hardness versus randomness tradeoff, Arthur-Merlin protocol, targeted hitting set generator}
}