The Power of Self-Directed Learning

Sally A. Goldman, Washington University in St. Louis
Robert H. Sloan, University of Illinois at Chicago

Abstract

This paper studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only "charged" for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n-1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then study the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC) dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound on the self-directed learning complexity. Finally, we explore the relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.
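
To make the model concrete, the following minimal Python sketch illustrates one simple self-directed strategy for monotone monomials over {0,1}^n (an illustration of the model, not necessarily the algorithm analyzed in the paper; the function names and brute-force enumeration are ours). The learner queries points in order of increasing number of ones, predicting "negative" until its first mistake; the first positive point it meets is exactly the characteristic vector of the target's variable set, so at most one mistake is ever incurred. The mistake bound, not the running time, is the quantity of interest.

    from itertools import combinations, product

    def target_label(x, S):
        """Label of point x under the monotone monomial AND_{i in S} x_i."""
        return all(x[i] for i in S)

    def self_directed_learn_monomial(n, S):
        """Self-directed learner for monotone monomials over {0,1}^n.

        Queries points in order of increasing number of ones, predicting
        negative until the first mistake; that mistake reveals the target
        exactly, so at most one mistake is ever made.
        """
        mistakes = 0
        hypothesis = None  # unknown until the relevant variable set is revealed
        for x in sorted(product([0, 1], repeat=n), key=sum):
            if hypothesis is None:
                prediction = False  # predict negative while still uncertain
            else:
                prediction = all(x[i] for i in hypothesis)
            truth = target_label(x, S)
            if prediction != truth:
                mistakes += 1
                # The first (minimum-weight) positive point is exactly the
                # characteristic vector of the target's variable set.
                hypothesis = {i for i in range(n) if x[i] == 1}
        return mistakes

    # Every monotone monomial over 4 variables costs at most one mistake.
    n = 4
    assert all(self_directed_learn_monomial(n, set(S)) <= 1
               for r in range(n + 1)
               for S in combinations(range(n), r))

The key design point is that the learner, by controlling the query order, can arrange that any forced mistake is maximally informative: here a single mistake pins down the target exactly, whereas an adversarially ordered presentation could force far more errors.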