Document Type

Technical Report

Publication Date

1992-11-12

Filename

WUCS-92-49.pdf

DOI

10.7936/K7GQ6W2R

Technical Report Number

WUCS-92-49

Abstract

This paper studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only "charged" for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, ..., n-1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then study the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis dimension. We show that, in general, the VC dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC dimension gives a lower bound on the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.
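To make the model concrete, here is a minimal Python sketch (illustrative only, not taken from the report; the function name and strategy are assumptions) that simulates self-directed learning for the restricted case of monotone monomials over {0,1}^n. The learner picks its own presentation order, walking the instances in increasing Hamming weight and predicting negative until its first mistake; the first positive instance it meets must have support exactly equal to the target's relevant variables, so a single mistake identifies the concept.

```python
from itertools import combinations

def self_directed_monomial_demo(n, relevant):
    """Simulate self-directed learning of a monotone monomial over {0,1}^n.

    The (hypothetical) target labels x positive iff x[i] == 1 for every
    i in `relevant`.  The learner chooses the presentation order itself:
    instances in increasing Hamming weight, predicting negative until the
    first mistake, whose support then pins down the target exactly.
    """
    def target(x):
        return all(x[i] for i in relevant)

    mistakes = 0
    learned = None  # the learner's hypothesis once identified

    for weight in range(n + 1):                       # learner's chosen order
        for support in combinations(range(n), weight):
            x = tuple(1 if i in support else 0 for i in range(n))
            if learned is None:
                prediction = False                    # predict negative
            else:
                prediction = all(x[i] for i in learned)
            if prediction != target(x):
                mistakes += 1
                learned = set(support)  # first positive instance reveals R
    return mistakes

# One mistake, independent of which variables are relevant.
assert self_directed_monomial_demo(5, {1, 3}) == 1
assert self_directed_monomial_demo(5, set()) == 1
```

Note that self-directed learning charges only for wrong predictions, not for the number of instances presented, so the exhaustive enumeration above is free under this cost model; only the single mistake counts.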

Comments

Permanent URL: http://dx.doi.org/10.7936/K7GQ6W2R
