 Title Pages
 Series Foreword
 Preface

1 Introduction to Semi-Supervised Learning 
2 A Taxonomy for Semi-Supervised Learning Methods 
3 Semi-Supervised Text Classification Using EM 
4 Risks of Semi-Supervised Learning: How Unlabeled Data Can Degrade Performance of Generative Classifiers 
5 Probabilistic Semi-Supervised Clustering with Constraints 
6 Transductive Support Vector Machines 
7 Semi-Supervised Learning Using Semi-Definite Programming 
8 Gaussian Processes and the Null-Category Noise Model 
9 Entropy Regularization 
10 Data-Dependent Regularization 
11 Label Propagation and Quadratic Criterion 
12 The Geometric Basis of Semi-Supervised Learning 
13 Discrete Regularization 
14 Semi-Supervised Learning with Conditional Harmonic Mixing 
15 Graph Kernels by Spectral Transforms 
16 Spectral Methods for Dimensionality Reduction 
17 Modifying Distances 
18 Large-Scale Algorithms 
19 Semi-Supervised Protein Classification Using Cluster Kernels 
20 Prediction of Protein Function from Networks 
21 Analysis of Benchmarks 
22 An Augmented PAC Model for Semi-Supervised Learning 
23 Metric-Based Approaches for Semi-Supervised Regression and Classification 
24 Transductive Inference and Semi-Supervised Learning 
25 A Discussion of Semi-Supervised Learning and Transduction 
 References
 Notation and Symbols
 Contributors
 Index
Semi-Supervised Learning with Conditional Harmonic Mixing
 Chapter:
 14 Semi-Supervised Learning with Conditional Harmonic Mixing
 Source:
 Semi-Supervised Learning
 Author(s):
Christopher J. C. Burges
John C. Platt
 Publisher:
 The MIT Press
This chapter introduces a general probabilistic formulation called conditional harmonic mixing (CHM), in which the links are directed, a conditional probability matrix is associated with each link, and the number of classes can vary from node to node. The posterior class probability at each node is updated by minimizing the Kullback-Leibler (KL) divergence between its distribution and that predicted by its neighbors. For arbitrary graphs, as long as each unlabeled point is reachable from at least one training point, a solution always exists, is unique, and can be found by iteratively solving a sparse linear system. This result holds even if the graph contains loops, or if the conditional probability matrices are inconsistent. The chapter also shows how CHM can learn its transition probabilities. On the Reuters database, CHM improves the accuracy of the best available classifier.
Keywords: general probabilistic formulation, conditional harmonic mixing, CHM, Kullback-Leibler divergence, KL, Reuters database
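The propagation step described in the abstract can be sketched as follows. This is a toy illustration of the update rule only, not the authors' implementation: the function name, graph representation, and convergence parameters are assumptions. Each directed link carries a conditional probability matrix, labeled nodes are clamped, and the KL-minimizing update sets each unlabeled node's posterior to the average of its neighbors' transformed distributions, iterated as on a sparse linear system.

```python
import numpy as np

def chm_propagate(n_nodes, edges, labeled, n_classes=2, n_iter=200, tol=1e-8):
    """Toy conditional harmonic mixing sketch (illustrative only).

    edges:   list of (src, dst, M), where M is an (n_classes x n_classes)
             column-stochastic conditional probability matrix for the
             directed link src -> dst.
    labeled: dict mapping node index -> class index; these posteriors
             are clamped to one-hot distributions.
    """
    # Initialize all posteriors to uniform, then clamp labeled nodes.
    P = np.full((n_nodes, n_classes), 1.0 / n_classes)
    for node, c in labeled.items():
        P[node] = np.eye(n_classes)[c]

    # Group incoming links by destination node.
    incoming = {}
    for src, dst, M in edges:
        incoming.setdefault(dst, []).append((src, np.asarray(M, dtype=float)))

    for _ in range(n_iter):
        P_new = P.copy()
        for dst, nbrs in incoming.items():
            if dst in labeled:
                continue  # labeled posteriors stay fixed
            # KL-minimizing update: average of the neighbors' predictions.
            P_new[dst] = sum(M @ P[src] for src, M in nbrs) / len(nbrs)
        if np.abs(P_new - P).max() < tol:  # Jacobi-style fixed point reached
            P = P_new
            break
        P = P_new
    return P
```

On a small chain 0 → 1 → 2 with identity conditional matrices and node 0 labeled, the label propagates to both unlabeled nodes, matching the chapter's claim that a unique solution exists whenever every unlabeled point is reachable from a training point.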