eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz International Proceedings in Informatics
1868-8969
2021-07-02
30:1
30:16
10.4230/LIPIcs.ICALP.2021.30
article
Learning Stochastic Decision Trees
Blanc, Guy
1
Lange, Jane
2
Tan, Li-Yang
1
Stanford University, CA, USA
MIT, Cambridge, MA, USA
We give a quasipolynomial-time algorithm for learning stochastic decision trees that is optimally resilient to adversarial noise. Given an η-corrupted set of uniform random samples labeled by a size-s stochastic decision tree, our algorithm runs in time n^{O(log(s/ε)/ε²)} and returns a hypothesis with error within an additive 2η + ε of that of the Bayes optimal classifier. An additive 2η is the information-theoretic minimum.
Previously, no non-trivial algorithm with a guarantee of O(η) + ε was known, even for weaker noise models. Our algorithm is furthermore proper, returning a hypothesis that is itself a decision tree; previously, no such algorithm was known even in the noiseless setting.
https://drops.dagstuhl.de/storage/00lipics/lipics-vol198-icalp2021/LIPIcs.ICALP.2021.30/LIPIcs.ICALP.2021.30.pdf
Learning theory
decision trees
proper learning algorithms
adversarial noise