Complexity lower bounds like P != NP assert impossibility results for all possible programs of some restricted form. As there are presently enormous gaps in our lower-bound knowledge, a central question on the minds of today's complexity theorists is: how will we find better ways to reason about all efficient programs? I argue that some progress can be made by (very deliberately) thinking algorithmically about lower bounds. Slightly more precisely, to prove a lower bound against some class C of programs, we can start by treating the programs in C as inputs to another (larger) process, which is intended to perform some basic analysis of programs in C. By carefully studying the algorithmic "meta-analysis" of programs in C, we can learn more about the limitations of the programs being analyzed. This essay is mostly self-contained; little background knowledge is assumed of the reader.
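To make the idea concrete, here is a minimal sketch (in Python) of a "meta-analysis" algorithm whose inputs are programs from a restricted class C: small Boolean circuits over AND/OR/NOT gates, analyzed by a brute-force satisfiability check. The circuit encoding, the function names, and the brute-force analysis are illustrative assumptions of this sketch, not a construction from the paper; the essay's thesis is that studying such circuit-analysis algorithms (and, in particular, finding faster-than-trivial ones) can be turned into information about the limitations of the circuits being analyzed.

    from itertools import product

    def evaluate(circuit, n, assignment):
        # Wires 0..n-1 hold the input bits; each gate (op, a, b) reads earlier
        # wires a and b and appends one new wire.
        wires = list(assignment)
        for op, a, b in circuit:
            if op == "AND":
                wires.append(wires[a] & wires[b])
            elif op == "OR":
                wires.append(wires[a] | wires[b])
            elif op == "NOT":
                wires.append(1 - wires[a])  # b is ignored for NOT gates
            else:
                raise ValueError("unknown gate: " + op)
        return wires[-1]  # the last gate is the circuit's output

    def is_satisfiable(circuit, n):
        # The "basic analysis" run on members of C: brute-force circuit-SAT.
        return any(evaluate(circuit, n, x) == 1
                   for x in product((0, 1), repeat=n))

    # Example member of C: (x0 AND x1) OR (NOT x0), on n = 2 input variables.
    circuit = [("AND", 0, 1), ("NOT", 0, 0), ("OR", 2, 3)]
    print(is_satisfiable(circuit, 2))  # True (e.g. x0 = 0 satisfies it)

Here the circuits play the role of data: the analysis algorithm ranges over all programs in the class, which is exactly the shift in perspective the abstract describes.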
@InProceedings{williams:LIPIcs.CSL.2015.14,
  author    = {Williams, R. Ryan},
  title     = {{Thinking Algorithmically About Impossibility}},
  booktitle = {24th EACSL Annual Conference on Computer Science Logic (CSL 2015)},
  pages     = {14--23},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-939897-90-3},
  ISSN      = {1868-8969},
  year      = {2015},
  volume    = {41},
  editor    = {Kreutzer, Stephan},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CSL.2015.14},
  URN       = {urn:nbn:de:0030-drops-54396},
  doi       = {10.4230/LIPIcs.CSL.2015.14},
  annote    = {Keywords: satisfiability, derandomization, circuit complexity}
}