Unlabeled Data Does Provably Help
A fully supervised learner needs access to correctly labeled examples, whereas a semi-supervised learner has access to examples of which only a part is labeled. The hope is that a large collection of unlabeled examples significantly reduces the need for labeled ones. It is widely believed that this reduction of "label complexity" is marginal unless the hidden target concept and the domain distribution satisfy some "compatibility assumptions". Several recent papers support this belief. In this paper, we revitalize the discussion by presenting a result that goes in the other direction. To this end, we consider the PAC-learning model in two settings: the (classical) fully supervised setting and the semi-supervised setting. We show that the "label-complexity gap" between the semi-supervised and the fully supervised setting can become arbitrarily large for concept classes of infinite VC-dimension (or for sequences of classes whose VC-dimensions are finite but become arbitrarily large). On the other hand, this gap is bounded by O(ln |C|) for every finite concept class C that contains the constant-zero and the constant-one function. A similar statement holds for all classes C of finite VC-dimension.
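For orientation, recall the classical sample size bound for learning a finite concept class C in the realizable PAC model; this is a textbook fact, not a result of this paper, and it indicates where a ln |C| dependence naturally arises (here ε denotes the accuracy and δ the confidence parameter):

% Standard realizable-case PAC bound for a finite class C (a textbook
% fact, stated here only for context): with this many labeled examples,
% any consistent learner outputs, with probability at least 1 - delta,
% a hypothesis of error at most epsilon.
\[
  m(\varepsilon,\delta) \;\le\; \left\lceil \frac{1}{\varepsilon}\Bigl(\ln|C| + \ln\frac{1}{\delta}\Bigr) \right\rceil
\]

Since ln |C| is the only class-dependent term in this bound, the O(ln |C|) bound on the label-complexity gap says, informally, that for such finite classes unlabeled data can save at most this classical class-size factor.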
algorithmic learning
sample complexity
semi-supervised learning
185-196
Regular Paper
Malte Darnstädt
Hans Ulrich Simon
Balázs Szörényi
10.4230/LIPIcs.STACS.2013.185
Creative Commons Attribution-NoDerivs 3.0 Unported license
https://creativecommons.org/licenses/by-nd/3.0/legalcode