 Title Pages
 Series Foreword
 Preface

1 Introduction to Semi-Supervised Learning
2 A Taxonomy for Semi-Supervised Learning Methods
3 Semi-Supervised Text Classification Using EM
4 Risks of Semi-Supervised Learning: How Unlabeled Data Can Degrade Performance of Generative Classifiers
5 Probabilistic Semi-Supervised Clustering with Constraints
6 Transductive Support Vector Machines
7 Semi-Supervised Learning Using Semi-Definite Programming
8 Gaussian Processes and the Null-Category Noise Model
9 Entropy Regularization
10 Data-Dependent Regularization
11 Label Propagation and Quadratic Criterion
12 The Geometric Basis of Semi-Supervised Learning
13 Discrete Regularization
14 Semi-Supervised Learning with Conditional Harmonic Mixing
15 Graph Kernels by Spectral Transforms
16 Spectral Methods for Dimensionality Reduction
17 Modifying Distances
18 Large-Scale Algorithms
19 Semi-Supervised Protein Classification Using Cluster Kernels
20 Prediction of Protein Function from Networks
21 Analysis of Benchmarks
22 An Augmented PAC Model for Semi-Supervised Learning
23 Metric-Based Approaches for Semi-Supervised Regression and Classification
24 Transductive Inference and Semi-Supervised Learning
25 A Discussion of Semi-Supervised Learning and Transduction
 References
 Notation and Symbols
 Contributors
 Index
Label Propagation and Quadratic Criterion
 Chapter:
 11 Label Propagation and Quadratic Criterion
 Source:
 Semi-Supervised Learning
 Author(s):
Yoshua Bengio
Olivier Delalleau
Nicolas Le Roux
 Publisher:
 The MIT Press
This chapter shows how the various graph-based algorithms for semi-supervised learning can be cast into a common framework in which one minimizes a quadratic cost criterion whose closed-form solution is found by solving a linear system of size n (the total number of data points). The cost criterion naturally leads to an extension of such algorithms to the inductive setting, where test samples arrive one at a time: the derived induction formula can be evaluated in O(n) time, which is much more efficient than solving the linear system again exactly (which in general costs O(kn^{2}) time for a sparse graph where each data point has k neighbors). The induction formula is also used to show that when the similarity between points satisfies a locality property, these algorithms suffer from the curse of dimensionality with respect to the dimensionality of the underlying manifold.
Keywords: graph-based algorithms, semi-supervised learning, quadratic cost criterion, derived induction formula, locality property, dimensionality
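The framework summarized in the abstract can be illustrated with a minimal sketch (not the authors' code): a Gaussian affinity matrix, a quadratic criterion that combines a fitting term on the labeled points with a graph-Laplacian smoothness term, its closed-form minimizer obtained by solving a single linear system of size n, and the O(n) induction formula as a similarity-weighted average over training points. The function names, the regularization weight `lam`, and the kernel width `sigma` are illustrative assumptions, not notation from the chapter.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    # Gaussian similarity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)),
    # with zero diagonal (no self-affinity).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def label_propagation_quadratic(X, y, labeled, lam=1.0, sigma=1.0):
    """Minimize sum_{i labeled} (f_i - y_i)^2 + lam * f^T L f in closed
    form by solving one linear system of size n (n = number of points)."""
    n = len(X)
    W = rbf_affinity(X, sigma)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    S = np.zeros((n, n))
    S[labeled, labeled] = 1.0               # diagonal indicator of labeled points
    # Closed-form solution: (S + lam * L) f = S y
    return np.linalg.solve(S + lam * L, S @ y)

def induct(x_new, X, f, sigma=1.0):
    """O(n) induction formula for a new test point: a similarity-weighted
    average of the training values f, with no need to re-solve the system."""
    w = np.exp(-((X - x_new) ** 2).sum(-1) / (2.0 * sigma ** 2))
    return w @ f / w.sum()
```

As a usage example, labeling one point in each of two well-separated clusters with +1 and -1 propagates those signs to the rest of each cluster, and `induct` assigns a new point the sign of its nearest cluster in O(n) time.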