Model Selection
This chapter addresses the problem of model selection. The success of machine learning techniques depends heavily on the choice of hyperparameters such as basis functions, the kernel bandwidth, the regularization parameter, and the importance-flattening parameter, which makes model selection one of the most fundamental and crucial topics in machine learning. Standard model selection schemes such as the Akaike information criterion, cross-validation, and the subspace information criterion are theoretically justified in terms of their unbiasedness as generalization error estimators. However, such theoretical guarantees no longer hold under covariate shift. The chapter introduces modified variants of these criteria based on importance-weighting techniques, shows that the modified methods remain properly unbiased even under covariate shift, and illustrates their usefulness through numerical experiments.
Keywords: machine learning, model selection, importance-weighting techniques, Akaike information criterion, covariate shift
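The importance-weighted cross-validation idea summarized above can be illustrated with a minimal sketch. The following is a hypothetical example, not the chapter's code: it assumes the training and test input densities are known Gaussians (in practice the importance ratio must itself be estimated), reweights each held-out loss by the ratio p_test(x)/p_train(x), and selects kernel ridge regression hyperparameters by minimizing the weighted CV score.

```python
# Minimal sketch of importance-weighted cross-validation (IWCV) under
# covariate shift. Densities, model, and hyperparameter grid are
# illustrative assumptions, not taken from the chapter.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Assumed Gaussian input densities: training inputs are drawn from
# p_train, while generalization error is measured under p_test.
def p_train(x):
    return np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

def p_test(x):
    return np.exp(-0.5 * ((x - 2.0) / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi))

def true_f(x):
    return np.sinc(x)

# Training sample from the training input density, with additive noise.
x_tr = rng.normal(1.0, 0.5, size=200)
y_tr = true_f(x_tr) + 0.1 * rng.standard_normal(200)
w_tr = p_test(x_tr) / p_train(x_tr)  # importance weights

def iwcv_score(alpha, gamma, n_splits=5):
    """Importance-weighted k-fold CV estimate of the test-domain error."""
    losses = []
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=0).split(x_tr):
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        model.fit(x_tr[tr_idx, None], y_tr[tr_idx])
        resid = y_tr[va_idx] - model.predict(x_tr[va_idx, None])
        # Weighting each held-out loss by p_test/p_train makes the CV
        # score (almost) unbiased for the generalization error under
        # covariate shift, which plain CV is not.
        losses.append(np.mean(w_tr[va_idx] * resid ** 2))
    return np.mean(losses)

# Model selection: choose the hyperparameters minimizing the IWCV score.
grid = [(a, g) for a in (1e-3, 1e-2, 1e-1) for g in (0.5, 1.0, 2.0)]
best = min(grid, key=lambda p: iwcv_score(*p))
print("selected (alpha, gamma):", best)
```

The same reweighting device underlies the modified Akaike and subspace information criteria discussed in the chapter: in each case, training-sample terms are multiplied by the importance ratio so that the criterion again estimates the test-distribution generalization error.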