Suppose the fastest algorithm that we can design for some problem runs in time O(n^2). However, we want to solve the problem on big data inputs, for which quadratic time is impractically slow. We can keep searching for a faster algorithm, but maybe none exists. Is there any reasoning that provides evidence against significantly faster algorithms, and thus allows us to stop searching? In other words, is there an analogue of NP-hardness for polynomial-time problems? In this tutorial, we give an introduction to fine-grained complexity theory, which allows us to rule out faster algorithms by proving conditional lower bounds via fine-grained reductions from certain key conjectures. We define these terms and present exemplary lower bounds.
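For illustration, the following sketch states the general shape of a fine-grained reduction and the conditional lower bound it yields. The formalization follows the standard definition used in the fine-grained complexity literature and is not spelled out in the abstract itself; the Strong Exponential Time Hypothesis (SETH) and the Orthogonal Vectors (OV) problem are used here only as an assumed illustrative example of a "key conjecture" and a target problem.

% Sketch (assumed formalization): a fine-grained reduction from (A, a) to (B, b)
% guarantees that any polynomial-factor improvement over time b(n) for B
% would transfer to a polynomial-factor improvement over time a(n) for A.
\[
  \exists\, \varepsilon > 0:\; B \in \mathrm{TIME}\!\big(b(n)^{1-\varepsilon}\big)
  \;\Longrightarrow\;
  \exists\, \delta > 0:\; A \in \mathrm{TIME}\!\big(a(n)^{1-\delta}\big).
\]
% Illustrative instance (not part of the abstract): assuming SETH, the OV problem
% on n vectors has no O(n^{2-eps}) algorithm for any eps > 0. A fine-grained
% reduction from OV with a = b = n^2 then conditionally rules out truly
% subquadratic algorithms for the target problem B.

Contrapositively, if the conjecture that A requires time a(n)^{1-o(1)} is true, then B cannot be solved in time b(n)^{1-epsilon} for any epsilon > 0; this is the sense in which the lower bound for B is "conditional".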