On Bayesian Transduction: Implications for the Covariate Shift Problem
This chapter analyzes Bayesian supervised learning, with extensions to semisupervised learning and to learning under covariate or dataset shift. The main result is an expression for the generalization-optimal Bayesian procedure. The resulting "Bayesian transduction" average is optimal for a realizable model. For semisupervised learning, this implies that all available data, including unlabeled data, should enter the likelihood, and hence the parameter posterior. In the case of covariate or dataset shift, the situation is contingent on the parameterization.
Keywords: Bayesian supervised learning, semisupervised learning, covariate shift, dataset shift
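The claim that unlabeled data should enter the likelihood can be illustrated with a minimal numerical sketch. The toy model below is an assumption for illustration only, not taken from the chapter: a two-class generative model with known unit-variance Gaussian class conditionals and a single unknown class-prior parameter handled on a grid. Unlabeled inputs contribute through their marginal density, and predictions average over the resulting posterior, in the spirit of the transduction average described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy generative model: y ~ Bernoulli(pi), x | y ~ N(mu_y, 1)
# with known class means; the unknown parameter is the class prior pi.
MU = np.array([-1.0, 1.0])

def gauss(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

# Synthetic data: a few labeled pairs and many unlabeled inputs.
true_pi = 0.7
y_lab = (rng.random(5) < true_pi).astype(int)
x_lab = rng.normal(MU[y_lab], 1.0)
y_unl = (rng.random(200) < true_pi).astype(int)
x_unl = rng.normal(MU[y_unl], 1.0)

pi_grid = np.linspace(0.01, 0.99, 99)

# Labeled data: only the pi-dependent part of p(x, y | pi) matters,
# i.e. pi^y (1 - pi)^(1 - y); the Gaussian factor is constant in pi.
ll = (y_lab[:, None] * np.log(pi_grid)
      + (1 - y_lab[:, None]) * np.log(1 - pi_grid)).sum(axis=0)

# Unlabeled data enter the likelihood via the marginal
# p(x | pi) = (1 - pi) N(x; mu_0, 1) + pi N(x; mu_1, 1).
px = ((1 - pi_grid) * gauss(x_unl[:, None], MU[0])
      + pi_grid * gauss(x_unl[:, None], MU[1]))
ll = ll + np.log(px).sum(axis=0)

# Flat prior on the grid -> posterior over pi.
post = np.exp(ll - ll.max())
post /= post.sum()

def predict(x_star):
    """Posterior-predictive p(y = 1 | x*, data), averaging over pi."""
    p1 = pi_grid * gauss(x_star, MU[1])
    p0 = (1 - pi_grid) * gauss(x_star, MU[0])
    return float(np.sum(post * p1 / (p0 + p1)))

print(round(predict(0.0), 3))  # class-1 probability at the midpoint
```

With enough unlabeled inputs, the posterior over the class prior concentrates near its true value even though only a handful of labels are observed, which is the sense in which the unlabeled data sharpen the parameter posterior.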