On the Training/Test Distributions Gap: A Data Representation Learning Framework
This chapter discusses several dataset shift learning problems from a formal, statistical point of view. It provides definitions for “multitask learning,” “inductive transfer,” and “domain adaptation,” and discusses the parameters along which such learning scenarios can be taxonomized. The chapter then focuses on one concrete domain adaptation setting and shows how error bounds can be derived for it. These bounds can be reliably estimated from finite samples of training data and do not rely on any assumptions about similarity between the domain from which the labeled training data is sampled and the target (or test) data. However, they are stated relative to the performance of some optimal classifier, rather than providing any absolute performance guarantee.
Keywords: dataset shift learning, multitask learning, inductive transfer, domain adaptation, error bounds
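For readers unfamiliar with the flavor of such results, the following is an illustrative sketch of a domain adaptation bound of the kind the abstract alludes to, in the style of the H-divergence analysis of Ben-David et al.; the symbols $\epsilon_S$, $\epsilon_T$, $d_{\mathcal{H}\Delta\mathcal{H}}$, and $\lambda$ are standard in that literature and are assumptions here, not necessarily the exact quantities or statement proved in this chapter.

% Illustrative relative error bound for domain adaptation (sketch):
% \epsilon_S(h), \epsilon_T(h) : source- and target-domain errors of hypothesis h
% d_{\mathcal{H}\Delta\mathcal{H}} : a divergence between the two unlabeled marginal
%                                    distributions, estimable from finite samples
% \lambda : combined error of the best hypothesis in the class \mathcal{H}
\[
  \epsilon_T(h) \;\le\; \epsilon_S(h)
    \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
    \;+\; \lambda,
  \qquad
  \lambda \;=\; \min_{h' \in \mathcal{H}} \bigl[\epsilon_S(h') + \epsilon_T(h')\bigr].
\]

In bounds of this shape, the divergence term can be estimated from finite (unlabeled) samples of both domains without any prior assumption that the domains are similar, and the $\lambda$ term makes the guarantee relative to the best classifier in the class rather than absolute, matching the two properties highlighted in the abstract.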