Instance-Wise Hardness and Refutation versus Derandomization for Arthur-Merlin Protocols

Bibliographic Details
Published in: Computational complexity, Vol. 34, No. 2, p. 15
Main Authors: van Melkebeek, Dieter; Mocelin Sdroievski, Nicollas
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 01.12.2025
Springer Nature B.V.
ISSN: 1016-3328
eISSN: 1420-8954
DOI: 10.1007/s00037-025-00279-2

Summary: A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion, based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. Follow-up works managed to obtain full equivalences in the BPP setting by exploiting a compression property of classical pseudorandom generator constructions. In particular, Chen, Tell, and Williams (FOCS'23) showed that derandomization of BPP is equivalent to constructive lower bounds against algorithms that go through a compression phase. In this paper, we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log^2 a) can only compute f correctly on finitely many inputs, then AM is in NP. We also obtain equivalences between constructive lower bounds against Arthur-Merlin protocols that go through a compression phase and derandomization of AM via targeted generators. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs of proximity for nondeterministic computations. As a by-product of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). By-products in the average-case setting include the first uniform hardness vs. randomness trade-offs for AM, as well as an unconditional mild derandomization result for AM.
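
For readers who want the quoted example in symbols, the following is a minimal LaTeX sketch. The theorem restates the sample hardness-to-derandomization result from the summary; the preceding definition of a targeted hitting-set generator is an informal formulation added here for orientation only, and may not match the paper's exact notion.

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}
\begin{document}

% Informal, hypothetical formulation of a targeted hitting-set
% generator for AM; the paper's precise notion may differ.
\begin{definition}[Targeted hitting-set generator for AM, informal]
A deterministic algorithm $H$ is a \emph{targeted} hitting-set
generator for Arthur--Merlin protocols if, given an instance
$x \in \{0,1\}^n$ and a size bound $m$, it outputs a set
$H(x) \subseteq \{0,1\}^m$ such that every co-nondeterministic
circuit of size $m$ obtained from an AM protocol on input $x$
that accepts at least half of the strings in $\{0,1\}^m$ also
accepts some element of $H(x)$.
\end{definition}

% The example result quoted in the summary, in symbols.
\begin{theorem}[Sample hardness-to-derandomization result]
Let $f \colon \{0,1\}^* \to \{0,1\}^*$ be a length-preserving
function computable by a nondeterministic algorithm that runs in
time $n^a$. If every Arthur--Merlin protocol that runs in time
$n^c$, for $c = O(\log^2 a)$, computes $f$ correctly on only
finitely many inputs, then $\mathrm{AM} \subseteq \mathrm{NP}$.
\end{theorem}

\end{document}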