A discriminative model for semi-supervised learning
- 29 March 2010
- Research article
- Published by Association for Computing Machinery (ACM) in Journal of the ACM
- Vol. 57 (3), 1-46
- https://doi.org/10.1145/1706591.1706599
Abstract
Supervised learning—that is, learning from labeled examples—is an area of Machine Learning that has reached substantial maturity. It has generated general-purpose, practically successful algorithms, and its foundations are quite well understood and captured by theoretical frameworks such as the PAC-learning model and the Statistical Learning theory framework. However, for many contemporary practical problems such as classifying web pages or detecting spam, there is often additional information available in the form of unlabeled data, which is often much cheaper and more plentiful than labeled data. As a consequence, there has recently been substantial interest in semi-supervised learning—using unlabeled data together with labeled data—since any useful information that reduces the amount of labeled data needed can be a significant benefit. Several techniques have been developed for doing this, along with experimental results on a variety of different learning problems. Unfortunately, the standard learning frameworks for reasoning about supervised learning do not capture the key aspects and assumptions underlying these semi-supervised learning methods. In this article, we describe an augmented version of the PAC model designed for semi-supervised learning that can be used to reason about many of the different approaches taken over the past decade in the Machine Learning community. This model provides a unified framework for analyzing when and why unlabeled data can help, in which one can analyze both sample-complexity and algorithmic issues. The model can be viewed as an extension of the standard PAC model where, in addition to a concept class C, one also proposes a compatibility notion: a type of compatibility that one believes the target concept should have with the underlying distribution of data.
Unlabeled data is then potentially helpful in this setting because it allows one to estimate compatibility over the space of hypotheses, and to reduce the size of the search space from the whole set of hypotheses C down to those that, according to one's assumptions, are a priori reasonable with respect to the distribution. As we show, many of the assumptions underlying existing semi-supervised learning algorithms can be formulated in this framework. After proposing the model, we then analyze sample-complexity issues in this setting: that is, how much of each type of data one should expect to need in order to learn well, and what the key quantities are that these numbers depend on. We also consider the algorithmic question of how to efficiently optimize for natural classes and compatibility notions, and provide several algorithmic results, including an improved bound for Co-Training with linear separators when the distribution satisfies independence given the label.
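The abstract's central idea—prune the hypothesis space using a compatibility notion estimated from unlabeled data, then fit the labeled data over the reduced class—can be sketched in a few lines. This is only an illustrative toy, not the paper's algorithm: the threshold classifiers, the "low-density separator" compatibility notion, the margin parameter, and every function name below are assumptions chosen for the example.

```python
# Toy sketch of compatibility-pruned ERM: threshold classifiers on [0, 1],
# with a made-up compatibility notion chi(h, D) = fraction of unlabeled
# points lying far from h's decision boundary (a low-density assumption).

def compatibility(h_threshold, unlabeled, margin=0.1):
    """Estimated compatibility of a threshold hypothesis with the
    unlabeled-data distribution: fraction of points farther than
    `margin` from the threshold."""
    far = sum(1 for x in unlabeled if abs(x - h_threshold) > margin)
    return far / len(unlabeled)

def labeled_error(h_threshold, labeled):
    """Empirical error of the classifier sign(x - h) on labeled pairs (x, y)."""
    wrong = sum(1 for x, y in labeled if (1 if x > h_threshold else -1) != y)
    return wrong / len(labeled)

def semi_supervised_erm(hypotheses, labeled, unlabeled, tau=0.9):
    # Step 1: use unlabeled data to shrink the search space to the
    # (nearly) compatible hypotheses.
    compatible = [h for h in hypotheses if compatibility(h, unlabeled) >= tau]
    if not compatible:  # fall back if the pruning was too aggressive
        compatible = hypotheses
    # Step 2: ordinary empirical risk minimization on the labeled sample,
    # but only over the pruned class.
    return min(compatible, key=lambda h: labeled_error(h, labeled))

# Toy data: two clusters separated near 0.5, with only two labeled points.
unlabeled = [0.05, 0.1, 0.2, 0.25, 0.3, 0.7, 0.75, 0.8, 0.9, 0.95]
labeled = [(0.1, -1), (0.9, 1)]
hypotheses = [i / 20 for i in range(1, 20)]  # candidate thresholds

best = semi_supervised_erm(hypotheses, labeled, unlabeled)
```

With only two labels, many thresholds achieve zero labeled error; the point of the sketch is that the compatibility filter discards all candidates outside the low-density region near 0.5, so far fewer labeled examples are needed to pin down a good hypothesis—the qualitative effect the abstract describes.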
Funding Information
- Division of Information and Intelligent Systems (IIS-0312814, CCR-0105488)
- National Science Foundation (IIS-0312814, CCR-0105488)