Dataset Shift in Machine Learning

Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence

Print publication date: 2008

Print ISBN-13: 9780262170055

Published to MIT Press Scholarship Online: August 2013

DOI: 10.7551/mitpress/9780262170055.001.0001


Discriminative Learning under Covariate Shift with a Single Optimization Problem


Chapter:
9 Discriminative Learning under Covariate Shift with a Single Optimization Problem
Source:
Dataset Shift in Machine Learning
Author(s):

Bickel Steffen

Brückner Michael

Scheffer Tobias

Publisher:
The MIT Press
DOI:10.7551/mitpress/9780262170055.003.0009

Abstract and Keywords

This chapter derives a discriminative model for learning under differing training and test distributions, and is organized as follows. Section 9.2 formalizes the problem setting. Section 9.3 reviews models for different training and test distributions. Section 9.4 introduces the discriminative model, and Section 9.5 describes the joint optimization problem. Primal and kernelized classifiers are derived for various training and test distributions in Sections 9.6 and 9.7. Section 9.8 analyzes the convexity of the integrated optimization problem. Section 9.9 provides empirical results, and Section 9.10 concludes.

Keywords:   covariate shift adaptation, training and test distributions, discriminative model, joint optimization, kernelized learning

We address classification problems for which the training instances are governed by a distribution that is allowed to differ arbitrarily from the test distribution—problems also referred to as classification under covariate shift. We derive a solution that is purely discriminative: neither training nor test distribution is modeled explicitly. We formulate the general problem of learning under covariate shift as an integrated optimization problem and instantiate a kernel logistic regression and an exponential model classifier for differing training and test distributions. We show under which condition the optimization problem is convex. We study the method empirically on problems of spam filtering, text classification, and land mine detection.

9.1 Introduction

Most machine learning algorithms are constructed under the assumption that the training data is governed by exactly the same distribution to which the model will later be exposed. In practice, control over the data generation process is often less than perfect. Training data may be obtained under laboratory conditions that cannot be expected after deployment of a system; spam filters may be used by individuals whose distribution of inbound emails diverges from the distribution reflected in public training corpora; image processing systems may be deployed to foreign geographic regions where vegetation and lighting conditions result in a distinct distribution of input patterns.

The case of distinct training and test distributions in a learning problem has been referred to as covariate shift and sample selection bias—albeit the term sample selection bias actually refers to a case in which each training instance is originally drawn from the test distribution, but is then selected into the training sample with some probability, or discarded otherwise.

The covariate shift model and the missing at random case in the sample selection bias model allow for differences between the training and test distribution of instances; the conditional distribution of the class variable given the instance is constant over training and test set.

In discriminative learning tasks such as classification, the classifier’s goal is to produce the correct output given the input. It is widely accepted that this is best performed by discriminative learners that directly maximize a quality measure of the produced output. Model-based optimization criteria such as the joint likelihood of input and output, by contrast, additionally assess how well the classifier models the distribution of input values. This amounts to adding a term to the criterion that is irrelevant for the task at hand.

We contribute a discriminative model for learning under arbitrarily different training and test distributions. The model directly characterizes the divergence between training and test distribution, without the intermediate – intrinsically model-based – step of estimating training and test distribution. We formulate the search for all model parameters as an integrated optimization problem. This complements the predominant procedure of first estimating the bias of the training sample, and then learning the classifier on a weighted version of the training sample. We show that the integrated optimization can be convex, depending on the model type; it is convex for the exponential model. We derive a Newton gradient descent procedure, leading to a kernel logistic regression and an exponential model classifier for covariate shift.

After formalizing the problem setting in section 9.2, we review models for differing training and test distributions in section 9.3. Section 9.4 introduces our discriminative model [Bickel et al., 2007] and section 9.5 describes the joint optimization problem. We derive primal and kernelized classifiers for differing training and test distributions in sections 9.6 and 9.7. In section 9.8, we analyze the convexity of the integrated optimization problem. Section 9.9 provides empirical results and section 9.10 concludes.

9.2 Problem Setting

In the covariate shift problem setting, a labeled training sample Xtr = (x_1, …, x_ntr) with labels ytr = (y_1, …, y_ntr) is available. This training sample is governed by an unknown distribution p(x|λ); labels are drawn according to an unknown target concept p(y|x). In addition, an unlabeled test set Xte = (x_{ntr+1}, …, x_{ntr+nte}) becomes available. The test set is governed by a different unknown distribution, p(x|θ). Training and test distribution may differ arbitrarily, but there is only one unknown target conditional class distribution p(y|x).

The goal is to find a classifier f : x ↦ y and to predict the missing labels y_{ntr+1}, …, y_{ntr+nte} for the test instances. From a purely transductive perspective, the classifier can even be seen as an auxiliary step and may be discarded after the labels y_{ntr+1}, …, y_{ntr+nte} have been predicted. The classifier should in any case perform well on the test data; that is, it should minimize some loss function E_{(x,y)∼θ}[ℓ(f(x), y)] that is defined with respect to the unknown test distribution p(x|θ).

Note that directly training f on the training data Xtr would minimize the loss with respect to p(x|λ). The minimum of this optimization problem will not generally coincide with the minimal loss on p(x|θ).
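
As a concrete toy instance of this setting, the snippet below draws a training sample from one Gaussian p(x|λ) and a test sample from a shifted Gaussian p(x|θ), while labels follow one and the same conditional p(y|x); the Gaussian means and the logistic target concept are illustrative assumptions, not part of the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tr, n_te = 2, 200, 200

# Training distribution p(x|lambda) and test distribution p(x|theta) differ...
X_tr = rng.normal(loc=-1.0, scale=1.0, size=(n_tr, d))
X_te = rng.normal(loc=+1.0, scale=1.0, size=(n_te, d))

# ...but the target concept p(y|x) is the same for both samples.
def p_y_given_x(X):
    return 1.0 / (1.0 + np.exp(-X.sum(axis=1)))

y_tr = np.where(rng.random(n_tr) < p_y_given_x(X_tr), 1, -1)
# The labels y_{ntr+1}, ..., y_{ntr+nte} of X_te remain unobserved.
```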

9.3 Prior Work

If training and test distributions were known, then the loss on the test distribution could be minimized by weighting the loss on the training distribution with an instance-specific factor. Proposition 9.1 [Shimodaira, 2000] illustrates that the scaling factor has to be p(x|θ)/p(x|λ).

Proposition 9.1 The expected loss with respect to θ equals the expected loss with respect to λ with weights p(x|θ)/p(x|λ) for the loss incurred by each x, provided that the support of p(x|θ) is contained in the support of p(x|λ):

$$E_{(x,y)\sim\theta}\left[\ell(f(x), y)\right] = E_{(x,y)\sim\lambda}\left[\frac{p(x|\theta)}{p(x|\lambda)}\, \ell(f(x), y)\right].$$
(9.1)

After expanding the expected value into its integral ∫ ℓ(f(x), y) p(x, y|θ) dx dy, the joint distribution p(x, y|θ) is decomposed into p(x|θ)p(y|x, θ). Since p(y|x, λ) = p(y|x) = p(y|x, θ) is the global conditional distribution of the class variable given the instance, proposition 9.1 follows. All instances x with positive p(x|θ) are integrated over. Hence, (9.1) holds as long as each x with positive p(x|θ) also has a positive p(x|λ); otherwise, the denominator vanishes. This shows that covariate shift can only be compensated for as long as the training distribution covers the entire support of the test distribution. If a test instance had zero density under the training distribution, the test-to-training density ratio that it would need to be scaled with would incur a zero denominator.
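
Spelled out, the expansion in this proof reads as follows; the middle step uses p(y|x, θ) = p(y|x) and multiplies and divides by p(x|λ), which is positive wherever p(x|θ) is positive:

```latex
\begin{align*}
E_{(x,y)\sim\theta}[\ell(f(x), y)]
  &= \int \ell(f(x), y)\, p(y|x, \theta)\, p(x|\theta)\, dx\, dy \\
  &= \int \ell(f(x), y)\, p(y|x)\, \frac{p(x|\theta)}{p(x|\lambda)}\, p(x|\lambda)\, dx\, dy
   = E_{(x,y)\sim\lambda}\left[\frac{p(x|\theta)}{p(x|\lambda)}\, \ell(f(x), y)\right].
\end{align*}
```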

Both p(x|θ) and p(x|λ) are unknown, but p(x|θ) is reflected in Xte, as is p(x|λ) in Xtr. A straightforward approach to compensating for covariate shift is to first obtain estimates p̂(x|θ) and p̂(x|λ) from the test and training data, respectively, using kernel density estimation [Shimodaira, 2000; Sugiyama and Müller, 2005b] (see also chapter 7). In a second step, the estimated density ratio is used to resample the training instances, or to train with weighted examples.

This method decouples the problem. First, it estimates training and test distributions. This step is intrinsically model-based and only loosely related to the ultimate goal of accurate classification. In a subsequent step, the classifier is derived given fixed weights. Since the parameters of the final classifier and the parameters that control the weights are not independent, this decomposition into two optimization steps cannot generally find the optimal setting of the joint parameter vector.
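
A minimal sketch of this decoupled procedure, assuming scikit-learn's KernelDensity for the two density estimates and a weighted logistic regression for the second step; the bandwidth value and the clipping of extreme log-ratios are illustrative choices, not prescribed by the chapter:

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.linear_model import LogisticRegression

def two_step_covariate_shift(X_tr, y_tr, X_te, bandwidth=0.5):
    """Decoupled baseline: (1) estimate densities, (2) train on weighted data."""
    # Step 1: kernel density estimates of p(x|lambda) and p(x|theta).
    kde_tr = KernelDensity(bandwidth=bandwidth).fit(X_tr)
    kde_te = KernelDensity(bandwidth=bandwidth).fit(X_te)
    # Test-to-training density ratio p(x|theta)/p(x|lambda) per training point.
    log_ratio = kde_te.score_samples(X_tr) - kde_tr.score_samples(X_tr)
    weights = np.exp(np.clip(log_ratio, -10.0, 10.0))  # clip for stability
    # Step 2: classifier trained on the reweighted training sample; the weights
    # stay fixed from here on, which is exactly the decoupling criticized above.
    return LogisticRegression().fit(X_tr, y_tr, sample_weight=weights)
```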

A line of work on learning under sample selection bias has meandered from the statistics and econometrics community into machine learning [Heckman, 1979; Zadrozny, 2004]. Sample selection bias relies on a model of the data generation process. Test instances are drawn under p(x|θ). Training instances are drawn by first sampling x from the test distribution p(x|θ); a selector variable s then decides whether x is moved into the training set (s = 1) or into the rejected set (s = 0). For instances in the training set (s = 1) a label is drawn from p(y|x); for instances in the rejected set the labels are unknown. A typical scenario for sample selection bias is credit scoring: the labeled training sample consists of customers who were given a loan in the past, and the rejected sample consists of customers who asked for but were not given a loan. New customers asking for a loan reflect the test distribution.

The distribution of the selector variable maps the test onto the training distribution:

$$p(x|\lambda) \propto p(x|\theta)\, p(s = 1|x, \theta, \lambda).$$
(9.2)

Proposition 9.2 [Zadrozny, 2004; Bickel and Scheffer, 2007] says that minimizing the loss on instances weighted by p(s = 1|x, θ, λ)^{-1} in fact minimizes the expected loss with respect to θ.

Proposition 9.2 The expected loss with respect to θ is proportional to the expected loss with respect to λ with weights p(s = 1|x, θ, λ)^{-1} for the loss incurred by each x, provided that the support of p(x|θ) is contained in the support of p(x|λ):

$$E_{(x,y)\sim\theta}\left[\ell(f(x), y)\right] \propto E_{(x,y)\sim\lambda}\left[\frac{1}{p(s = 1|x, \theta, \lambda)}\, \ell(f(x), y)\right].$$
(9.3)

When the model is implemented, p(s = 1|x, θ, λ) is learned by discriminating the training against the rejected examples; in a second step the target model is learned by following proposition 9.2 and weighting training examples by p(s = 1|x, θ, λ)^{-1}. No test examples drawn directly from p(x|θ) are needed to train the model; only labeled selected and unlabeled rejected examples are required. This is in contrast to the covariate shift model, which requires a sample drawn from the test distribution but assumes no selection process and needs no rejected examples.

Propensity scores [Rosenbaum and Rubin, 1983; Lunceford and Davidian, 2004] are applied in settings related to sample selection bias; the training data is again assumed to be drawn from the test distribution p(x|θ) followed by a selection process. The difference from the sample selection bias setting is that the selected and the rejected examples are labeled. Weighting the selected examples by the inverse of the propensity score, p(s = 1|x, λ, θ)^{-1}, and weighting the rejected examples by p(s = 0|x, λ, θ)^{-1} results in two unbiased samples with respect to the test distribution.

Propensity scoring can precede a variety of analysis steps. This can be the training of a target model on reweighted data or just a statistical analysis of the two reweighted samples. A typical application for propensity scores is the analysis of the success of a medical treatment. Patients are selected to be given the treatment and some other patients are selected into the control group. If the selection is not randomized the outcome (e.g., ratio of cured patients) of the two groups cannot be compared directly and propensity scores can be applied.

Maximum entropy density estimation under sample selection bias has been studied by Dudík et al. [2006]. Bickel and Scheffer [2007] impose a Dirichlet process prior on several learning problems with related sample selection bias. Elkan [2001] and Japkowicz and Stephen [2002] investigate the case of training data that is only biased with respect to the class ratio; this can be seen as sample selection bias where the selection only depends on y.

Kernel mean matching (Gretton et al. in chapter 8) is a two-step method that first finds weights for the training instances such that the first moment of training and test sets (i.e., their mean value) matches in feature space. The subsequent training step uses these weights. The procedure requires a universal kernel; matching the means in feature space is then equivalent to matching all moments of the distributions.

In (9.4), Φ(⋅) is a mapping into a feature space and B is a regularization parameter. Gretton et al. in chapter 8 derive from (9.4) a quadratic program that can be solved with standard optimization tools:

$$\min_{\alpha}\ \left\| \frac{1}{n_{tr}} \sum_{i=1}^{n_{tr}} \alpha_i \Phi(x_i) - \frac{1}{n_{te}} \sum_{i=n_{tr}+1}^{n_{tr}+n_{te}} \Phi(x_i) \right\|^2 \quad \text{subject to} \quad \alpha_i \in [0, B] \ \text{ and } \ \left| \frac{1}{n_{tr}} \sum_{i=1}^{n_{tr}} \alpha_i - 1 \right| \le \epsilon.$$
(9.4)
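
To make the quadratic program concrete, the sketch below expands the squared feature-space distance of (9.4) into a quadratic in α (constant terms dropped) and hands it to a generic solver; cvxpy, the RBF kernel, and the values of B and γ are conveniences of this sketch, not chapter 8's implementation.

```python
import numpy as np
import cvxpy as cp
from sklearn.metrics.pairwise import rbf_kernel

def kmm_weights(X_tr, X_te, B=1000.0, eps=None, gamma=1.0):
    """Kernel mean matching: training weights alpha per the QP in (9.4)."""
    n_tr, n_te = len(X_tr), len(X_te)
    if eps is None:  # epsilon = (sqrt(n_tr) - 1)/sqrt(n_tr), cf. section 9.9
        eps = (np.sqrt(n_tr) - 1.0) / np.sqrt(n_tr)
    K = rbf_kernel(X_tr, X_tr, gamma=gamma)                  # K_ij = k(x_i, x_j)
    kappa = rbf_kernel(X_tr, X_te, gamma=gamma).sum(axis=1)  # cross terms
    alpha = cp.Variable(n_tr)
    # ||(1/n_tr) sum_i alpha_i Phi(x_i) - (1/n_te) sum_j Phi(x_j)||^2 expands to
    # alpha'K alpha / n_tr^2 - 2 kappa'alpha / (n_tr n_te) + const; a tiny ridge
    # keeps K numerically positive semidefinite for the solver.
    objective = cp.Minimize(
        cp.quad_form(alpha, K + 1e-8 * np.eye(n_tr)) / n_tr**2
        - (2.0 / (n_tr * n_te)) * (kappa @ alpha))
    constraints = [alpha >= 0, alpha <= B,
                   cp.abs(cp.sum(alpha) / n_tr - 1.0) <= eps]
    cp.Problem(objective, constraints).solve()
    return alpha.value
```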

9.4 Discriminative Weighting Factors

In this section, we derive a purely discriminative model that directly estimates weights for the training instances. No distributions over instances are modeled explicitly. We first introduce a selector variable σ: for each element x of the training set, selector variable σ = 1 indicates that it has been drawn into Xtr; for each x in the test data, σ = −1 indicates that it has been drawn into Xte. The probability p(σ = 1|x, θ, λ) has the following intuitive meaning: given that an instance x has been drawn at random from the bag Xtr ∪ Xte of training and test set, the probability that x originates from Xtr is p(σ = 1|x, θ, λ). Hence, the value of σ is observable for all training (σ = 1) and test (σ = −1) instances. The dependency between the instances and σ is undirected; neither training nor test set is assumed to be generated from the other sample.

In the following equations we will derive a discriminative expression for p(x|θ)/p(x|λ) which will no longer include any density on instances. When p(σ = −1) ≠ 0 (which is implied by the test set not being empty), the definition of σ allows us to rewrite the test distribution as p(x|θ) = p(x|σ = −1, θ). Since test instances are only dependent on parameter θ but not on parameter λ, equation p(x|σ = −1, θ) = p(x|σ = −1, θ, λ) follows. By an analogous argument, p(x|λ) = p(x|σ = 1, θ, λ) when p(σ = 1) ≠ 0. This implies (9.5).

In (9.6) the Bayes rule is applied twice; the two terms of p(x|θ, λ) cancel each other out in (9.7). Since p(σ = −1|x, θ, λ) = 1 − p(σ = 1|x, θ, λ), (9.8) follows. The conditional p(σ = 1|x, θ, λ) discriminates training (σ = 1) against test instances (σ = −1).

$$\frac{p(x|\theta)}{p(x|\lambda)} = \frac{p(x|\sigma = -1, \theta, \lambda)}{p(x|\sigma = 1, \theta, \lambda)}$$
(9.5)
$$= \frac{p(\sigma = -1|x, \theta, \lambda)\, p(x|\theta, \lambda)}{p(\sigma = -1|\theta, \lambda)} \cdot \frac{p(\sigma = 1|\theta, \lambda)}{p(\sigma = 1|x, \theta, \lambda)\, p(x|\theta, \lambda)}$$
(9.6)
$$= \frac{p(\sigma = 1|\theta, \lambda)}{p(\sigma = -1|\theta, \lambda)} \cdot \frac{p(\sigma = -1|x, \theta, \lambda)}{p(\sigma = 1|x, \theta, \lambda)}$$
(9.7)
$$= \frac{p(\sigma = 1|\theta, \lambda)}{p(\sigma = -1|\theta, \lambda)} \left( \frac{1}{p(\sigma = 1|x, \theta, \lambda)} - 1 \right).$$
(9.8)

The significance of (9.8) is that it shows how the optimal example weights, the test-to-training ratio p(x|θ)/p(x|λ), can be determined without knowledge of either training or test density. The right-hand side of (9.8) can be evaluated based on a model that discriminates training from test examples and outputs how much more likely an instance is to occur in the test data than it is to occur in the training data. Instead of potentially high-dimensional densities p(x|θ) and p(x|λ), a conditional distribution of the single binary variable σ needs to be modeled.

Equation (9.8) leaves us with the problem of estimating a parametric model p(σ = 1|x, v) of p(σ = 1|x, θ, λ). Such a model would predict test-to-training density ratios for the training data Xtr according to (9.8). In the following, we will derive the optimization problem that simultaneously determines parameters v of the test-to-training ratios and parameters w of the target classifier.
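
Before moving to the joint optimization, it is instructive to see the two-stage variant of this idea (the "two-stage LR" reference method of section 9.9) in a few lines: a logistic regression is trained to discriminate training (σ = 1) from test (σ = −1) instances and plugged into (9.8), with the prior ratio p(σ = 1)/p(σ = −1) estimated as n_tr/n_te; the scikit-learn calls are conveniences of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def discriminative_weights(X_tr, X_te):
    """Test-to-training ratios for the training data via (9.8)."""
    n_tr, n_te = len(X_tr), len(X_te)
    X = np.vstack([X_tr, X_te])
    sigma = np.concatenate([np.ones(n_tr), -np.ones(n_te)])
    model = LogisticRegression().fit(X, sigma)   # models p(sigma = 1 | x, v)
    col = list(model.classes_).index(1.0)        # column of class sigma = 1
    p_train = model.predict_proba(X_tr)[:, col]
    # (9.8): p(x|theta)/p(x|lambda)
    #        = p(sigma=1)/p(sigma=-1) * (1/p(sigma=1|x) - 1).
    return (n_tr / n_te) * (1.0 / p_train - 1.0)
```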

9.5 Integrated Model

Our goal is to find a classifier f which minimizes the expected loss under the test distribution. To this end, the best conceivable approximation is given by the Bayes decision based on all data available (9.9). For each test instance x, the Bayes rule decides on the class which minimizes the expected loss given x and all available data (9.10),

$$\operatorname{argmin}_f\, E_{(x,y)\sim\theta}\left[\ell(f(x), y)\right] \approx f_{Bayes}(x; X_{tr}, X_{te})$$
(9.9)
$$\text{with} \quad f_{Bayes}(x; X_{tr}, X_{te}) = \operatorname{argmin}_{\bar{y}} \sum_{y} \ell(\bar{y}, y)\, p(y|x, X_{tr}, X_{te}).$$
(9.10)

Let w be the parameters of a classification function p(y|x, w) and let v parameterize a model p(σ = 1|x, v) that characterizes the training-test difference. The Bayes decision is obtained by Bayesian model averaging, i.e., by integrating over all model parameters in (9.11),

$$p(y|x, X_{tr}, X_{te}) = \iint p(y|x, w)\, p(w, v|X_{tr}, X_{te})\, dv\, dw.$$
(9.11)

(9.11) exploits that the class-label posterior p(y|x, w) is conditionally independent of the parameters v of the test-to-training ratio given w, and also conditionally independent of the data given its parameters w. Bayesian model averaging (9.11) is usually computationally infeasible. The integral is therefore approximated by the single assignment of values to the parameters which maximizes the posterior, the MAP estimator. In our case, the MAP estimator naturally assigns values to all parameters, w and v (9.13):

$$f_{MAP}(x; X_{tr}, X_{te}) = \operatorname{argmax}_y\, p(y|x, w_{MAP})$$
(9.12)
$$\text{with} \quad (w_{MAP}, v_{MAP}) = \operatorname{argmax}_{w,v}\, p(w, v|X_{tr}, X_{te})$$
(9.13)
$$= \operatorname{argmax}_{w,v}\, p(w|v, X_{tr}, X_{te})\, p(v|X_{tr}, X_{te})$$
(9.14)
$$= \operatorname{argmax}_{w,v}\, p(w|v, X_{tr})\, p(v|X_{tr}, X_{te})$$
(9.15)
$$= \operatorname{argmax}_{w,v}\, p(X_{tr}|w, v)\, p(X_{tr}, X_{te}|v)\, p(w)\, p(v).$$
(9.16)

Equation (9.14) factorizes the joint posterior; (9.15) exploits that w is conditionally independent of the test data when the training-test difference v is given. Equation (9.16) applies the Bayes rule and shows that the posterior can be factorized into a likelihood function of the training data given the model parameters, p(Xtr|w, v), a likelihood function of the observed selection variables σ, written p(Xtr, Xte|v), and the priors on the model parameters.

The class-label posterior p(y|x, wMAP) is conditionally independent of vMAP given wMAP. However, wMAP and vMAP are dependent. Assigning a single MAP value to [w, v] instead of integrating over all values corresponds to the common approximation of the Bayes decision rule by a MAP hypothesis. However, sequential maximization of p(v|Xtr,Xte) over parameters v followed by maximization of p(w|v, Xtr) with fixed v over parameters w would amount to an additional degree of approximation and will not generally coincide with the maximum of the product in (9.14).

We will now discuss the likelihood functions p(Xtr|w, v) and p(Xtr, Xte|v). Since our goal is discriminative training, the likelihood function p(Xtr|w) (not taking the training-test difference v into account) would be ∏_i p(y_i|x_i, w). Intuitively, the test-to-training ratio p(x|θ)/p(x|λ) dictates how many times, on average, x should occur in Xtr if Xtr was governed by the test distribution θ. When the individual conditional likelihood of x is p(y|x, w), then the likelihood of p(x|θ)/p(x|λ) occurrences of x is p(y|x, w)^{p(x|θ)/p(x|λ)}. Using a parametric model p(σ|x, v), according to (9.8) the test-to-training ratio p(x|θ)/p(x|λ) can be expressed as

$$\frac{p(\sigma = 1|v)}{p(\sigma = -1|v)} \left( \frac{1}{p(\sigma = 1|x; v)} - 1 \right).$$

Therefore, we define the likelihood function as

$$p(X_{tr}|w, v) = \prod_{i=1}^{n_{tr}} p(y_i|x_i; w)^{\frac{p(\sigma = 1|v)}{p(\sigma = -1|v)} \left( \frac{1}{p(\sigma_i = 1|x_i; v)} - 1 \right)}.$$
(9.17)

As an immediate corollary of Manski and Lerman [1977], the likelihood function of (9.17) has the property that when the true value v* is given, its maximizer over w is a consistent estimator of the true parameter w* that has produced labels for the test data under the test distribution θ. That is, as the sample grows, the maximizer of (9.17) converges in probability to the true value w* of parameter w.

The likelihood function p(Xtr, Xte|v) resolves to p(σ_i = 1|x_i; v) for all training instances and p(σ_i = −1|x_i; v) for all test instances:

$$p(X_{tr}, X_{te}|v) = \prod_{i=1}^{n_{tr}} p(\sigma_i = 1|x_i; v) \prod_{i=n_{tr}+1}^{n_{tr}+n_{te}} p(\sigma_i = -1|x_i; v).$$
(9.18)

Equation (9.19) summarizes (9.13) through (9.18). Equation (9.20) inserts the likelihood models (9.17) and (9.18) and draws the constants p(σ = 1|v) and p(σ = −1|v) out of the product.

$$(w_{MAP}, v_{MAP}) = \operatorname{argmax}_{w,v}\, p(X_{tr}|w, v)\, p(X_{tr}, X_{te}|v)\, p(w)\, p(v)$$
(9.19)
$$= \operatorname{argmax}_{w,v} \left( \prod_{i=1}^{n_{tr}} p(y_i|x_i; w)^{\frac{1}{p(\sigma_i = 1|x_i; v)} - 1} \right)^{\frac{p(\sigma = 1|v)}{p(\sigma = -1|v)}} \left( \prod_{i=1}^{n_{tr}} p(\sigma_i = 1|x_i; v) \prod_{i=n_{tr}+1}^{n_{tr}+n_{te}} p(\sigma_i = -1|x_i; v) \right) p(w)\, p(v).$$
(9.20)

Out of curiosity, let us briefly consider the extreme case of disjoint training and test distributions, i.e., p(x|θ)p(x|λ) = 0 for all x. In this case, the second factor of (9.20) is maximized by a v that assigns p(σ = 1|x; v) = 1 for all elements of Xtr (subject to a possible regularization imposed by p(v)). The exponent in the first factor then becomes zero, so the likelihood of the training data equals 1 for all possible classifiers w. The choice of the classifier w is thus determined solely by the inductive bias p(w). This result makes perfect sense because the training sample contains no information about the test distribution.

Using a logistic model for p(σ = 1|x, v), we notice that (9.8) can be simplified as in (9.21):

$$\frac{p(\sigma = 1|v)}{p(\sigma = -1|v)} \left( \frac{1}{1/(1 + \exp(-v^\top x))} - 1 \right) = \frac{p(\sigma = 1|v)}{p(\sigma = -1|v)}\, \exp(-v^\top x).$$
(9.21)

Optimization problem 9.3 is derived from (9.20) in logarithmic form, using linear models v⊤x_i and w⊤x_i and a logistic model for p(σ = 1|x, v). Negative log-likelihoods are abbreviated ℓ_w(y_i w⊤x_i) = −log p(y_i|x_i; w) and ℓ_v(σ_i v⊤x_i) = −log p(σ_i|x_i; v), respectively; this notation emphasizes the duality between likelihoods and empirical loss functions. The regularization terms correspond to Gaussian priors on v and w with variances s_v² and s_w².

Optimization Problem 9.3 Over all w and v, minimize

$$\sum_{i=1}^{n_{tr}} \frac{p(\sigma = 1|v)}{p(\sigma = -1|v)}\, \exp(-v^\top x_i)\, \ell_w(y_i w^\top x_i) + \sum_{i=1}^{n_{tr}+n_{te}} \ell_v(\sigma_i v^\top x_i) + \frac{1}{2 s_w^2}\, w^\top w + \frac{1}{2 s_v^2}\, v^\top v.$$

9.6 Primal Learning Algorithm

We derive a Newton gradient method that directly minimizes optimization problem 9.3 in the attribute space. To this end, we need to derive the gradient and the Hessian of the objective function. The update rule assumes the form of a set of linear equations that have to be solved for the update vector [Δv, Δw]. It depends on the current parameters [v, w], all combinations of training and test data, and resulting coefficients. In order to express the update rule as a single equation in matrix form, we define

$$X = \begin{bmatrix} X_{tr} & 0 \\ X_{te} & 0 \\ 0 & X_{tr} \end{bmatrix},$$
(9.22)

where Xtr and Xte are the matrices of training and test vectors, respectively. We abbreviate

$$\ell_{v,i} = \ell_v(\sigma_i v^\top x_i); \qquad \ell'_{v,i}\, \sigma_i x_{ij} = \frac{\partial\, \ell_v(\sigma_i v^\top x_i)}{\partial v_j}; \qquad \ell''_{v,i}\, x_{ij} x_{ik} = \frac{\partial^2 \ell_v(\sigma_i v^\top x_i)}{\partial v_j\, \partial v_k};$$
(9.23)
$$\ell_{w,i} = \ell_w(y_i w^\top x_i); \qquad \ell'_{w,i}\, y_i x_{ij} = \frac{\partial\, \ell_w(y_i w^\top x_i)}{\partial w_j}; \qquad \ell''_{w,i}\, x_{ij} x_{ik} = \frac{\partial^2 \ell_w(y_i w^\top x_i)}{\partial w_j\, \partial w_k};$$
(9.24)
$$\omega_i = \frac{p(\sigma = 1|v)}{p(\sigma = -1|v)}\, \exp(-v^\top x_i),$$
(9.25)

and denote the objective function of optimization problem 9.3 by

$$F(v, w, X_{tr}, X_{te}) = \sum_{i=1}^{n_{tr}} \omega_i\, \ell_{w,i} + \sum_{i=1}^{n_{tr}+n_{te}} \ell_{v,i} + \frac{1}{2 s_w^2}\, w^\top w + \frac{1}{2 s_v^2}\, v^\top v.$$
(9.26)

We compute the gradient with respect to v and w.

$$\frac{\partial F(v, w, X_{tr}, X_{te})}{\partial v_j} = -\sum_{i=1}^{n_{tr}} \omega_i\, \ell_{w,i}\, x_{ij} + \sum_{i=1}^{n_{tr}+n_{te}} \ell'_{v,i}\, \sigma_i x_{ij} + \frac{1}{s_v^2}\, v_j,$$
(9.27)
$$\frac{\partial F(v, w, X_{tr}, X_{te})}{\partial w_j} = \sum_{i=1}^{n_{tr}} \omega_i\, \ell'_{w,i}\, y_i x_{ij} + \frac{1}{s_w^2}\, w_j.$$
(9.28)

The Hessian is the matrix of second derivatives.

$$\frac{\partial^2 F(v, w, X_{tr}, X_{te})}{\partial v_j\, \partial v_k} = \sum_{i=1}^{n_{tr}} \omega_i\, \ell_{w,i}\, x_{ij} x_{ik} + \sum_{i=1}^{n_{tr}+n_{te}} \ell''_{v,i}\, x_{ij} x_{ik} + \frac{1}{s_v^2}\, \delta_{jk},$$
(9.29)
$$\frac{\partial^2 F(v, w, X_{tr}, X_{te})}{\partial v_j\, \partial w_k} = -\sum_{i=1}^{n_{tr}} \omega_i\, \ell'_{w,i}\, y_i\, x_{ij} x_{ik},$$
(9.30)
$$\frac{\partial^2 F(v, w, X_{tr}, X_{te})}{\partial w_j\, \partial w_k} = \sum_{i=1}^{n_{tr}} \omega_i\, \ell''_{w,i}\, x_{ij} x_{ik} + \frac{1}{s_w^2}\, \delta_{jk}.$$
(9.31)

We can rewrite the gradient as X⊤g + S[v, w] and the Hessian as X⊤ΛX + S using the following definitions, where g = [g^(1), g^(2), g^(3)] and S is the block-diagonal matrix with blocks S^v and S^w:

$$g^{(1)}_i = -\omega_i\, \ell_{w,i} + \ell'_{v,i} \qquad \text{for } i = 1, \ldots, n_{tr},$$
(9.32)
$$g^{(2)}_i = -\ell'_{v,\, n_{tr}+i} \qquad \text{for } i = 1, \ldots, n_{te},$$
(9.33)
$$g^{(3)}_i = \omega_i\, \ell'_{w,i}\, y_i \qquad \text{for } i = 1, \ldots, n_{tr},$$
(9.34)
$$S^v_{i,i} = s_v^{-2} \qquad \text{for } i = 1, \ldots, \dim(X_{te}),$$
(9.35)
$$S^w_{i,i} = s_w^{-2} \qquad \text{for } i = 1, \ldots, \dim(X_{tr}),$$
(9.36)
$$\Lambda = \begin{bmatrix} \operatorname{diag}_{i=1..n_{tr}}\!\left(\omega_i \ell_{w,i} + \ell''_{v,i}\right) & 0 & \operatorname{diag}_{i=1..n_{tr}}\!\left(-\omega_i \ell'_{w,i} y_i\right) \\ 0 & \operatorname{diag}_{i=1..n_{te}}\!\left(\ell''_{v,\, n_{tr}+i}\right) & 0 \\ \operatorname{diag}_{i=1..n_{tr}}\!\left(-\omega_i \ell'_{w,i} y_i\right) & 0 & \operatorname{diag}_{i=1..n_{tr}}\!\left(\omega_i \ell''_{w,i}\right) \end{bmatrix}.$$
(9.37)

The update step for the Newton gradient descent minimization of optimization problem 9.3 is [v′, w′] ← [v, w] + [Δv, Δw] with

$$\left( X^\top \Lambda X + S \right) \begin{bmatrix} \Delta v \\ \Delta w \end{bmatrix} = -X^\top g - S \begin{bmatrix} v \\ w \end{bmatrix}.$$
(9.38)

Given the parameter w, a test instance x is classified as f(x; w) = sign(w⊤x).
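
A compact sketch of this primal procedure, assuming an exponential model for ℓ_w (the convex case of corollary 9.6 below) and a logistic model for ℓ_v, with the prior ratio p(σ = 1|v)/p(σ = −1|v) fixed to n_tr/n_te; the fixed iteration count and the missing line search are simplifications of the sketch.

```python
import numpy as np

def integrated_newton(X_tr, y_tr, X_te, s_v2=1.0, s_w2=1.0, n_iter=20):
    """Newton descent for optimization problem 9.3; returns (v, w)."""
    n_tr, d = X_tr.shape
    n_te = X_te.shape[0]
    X_all = np.vstack([X_tr, X_te])
    sigma = np.concatenate([np.ones(n_tr), -np.ones(n_te)])
    v, w = np.zeros(d), np.zeros(d)
    for _ in range(n_iter):
        # omega_i of (9.25), prior ratio estimated as n_tr/n_te.
        omega = (n_tr / n_te) * np.exp(-(X_tr @ v))
        # Logistic l_v at sigma_i v'x_i: first/second derivatives, cf. (9.55).
        e = np.exp(-sigma * (X_all @ v))
        lv1 = -e / (1.0 + e)
        lv2 = e / (1.0 + e) ** 2
        # Exponential l_w at y_i w'x_i: l_w = exp(-z), l_w' = -l_w, l_w'' = l_w.
        lw = np.exp(-y_tr * (X_tr @ w))
        # Gradient blocks (9.27) and (9.28).
        g_v = -X_tr.T @ (omega * lw) + X_all.T @ (lv1 * sigma) + v / s_v2
        g_w = -X_tr.T @ (omega * lw * y_tr) + w / s_w2
        # Hessian blocks (9.29)-(9.31); X.T @ (c[:, None] * X) = sum_i c_i x_i x_i'.
        H_vv = (X_tr.T @ ((omega * lw)[:, None] * X_tr)
                + X_all.T @ (lv2[:, None] * X_all) + np.eye(d) / s_v2)
        H_vw = X_tr.T @ ((omega * lw * y_tr)[:, None] * X_tr)  # -omega l_w' y_i
        H_ww = X_tr.T @ ((omega * lw)[:, None] * X_tr) + np.eye(d) / s_w2
        H = np.block([[H_vv, H_vw], [H_vw.T, H_ww]])
        # Newton system (9.38): H [dv; dw] = -gradient.
        delta = np.linalg.solve(H, -np.concatenate([g_v, g_w]))
        v, w = v + delta[:d], w + delta[d:]
    return v, w

# Prediction as above: f(x; w) = sign(w'x).
```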

9.7 Kernelized Learning Algorithm

We derive a kernelized version of the integrated classifier for differing training and test distributions. A transformation Φ maps instances into a target space in which a kernel function k(x_i, x_j) calculates the inner product Φ(x_i)⊤Φ(x_j).

The update rule (9.38) thus becomes

$$\left( \Phi(X)^\top \Lambda\, \Phi(X) + S \right) \begin{bmatrix} \Delta v \\ \Delta w \end{bmatrix} = -\Phi(X)^\top g - S \begin{bmatrix} v \\ w \end{bmatrix}.$$
(9.39)

Φ(X) is defined by

$$\Phi(X) = \begin{bmatrix} \Phi(X_{tr}) & 0 \\ \Phi(X_{te}) & 0 \\ 0 & \Phi(X_{tr}) \end{bmatrix}.$$
(9.40)

According to the representer theorem, the optimal separator is a linear combination of examples. Parameter vectors α and β in the dual space weight the influence of all examples:

$$\begin{bmatrix} v \\ w \end{bmatrix} = \Phi(X)^\top \begin{bmatrix} \alpha \\ \beta \end{bmatrix}$$
(9.41)

Equation 9.39 can therefore be rewritten as (9.42). We now multiply both sides by Φ(X) from the left and obtain (9.43). We replace all resulting occurrences of Φ(X)Φ(X)⊤ by the kernel matrix K and arrive at (9.44); S is replaced by S′ such that S′_{i,i} = s_v^{-2} for i = 1..n_tr + n_te and S′_{n_tr+n_te+i, n_tr+n_te+i} = s_w^{-2} for i = 1..n_tr. Equation 9.44 is satisfied when (9.45) is satisfied. Equation 9.45 is the update rule for the dual Newton gradient descent.

$$\left( \Phi(X)^\top \Lambda\, \Phi(X) + S \right) \Phi(X)^\top \begin{bmatrix} \Delta\alpha \\ \Delta\beta \end{bmatrix} = -\Phi(X)^\top g - S\, \Phi(X)^\top \begin{bmatrix} \alpha \\ \beta \end{bmatrix},$$
(9.42)
$$\Phi(X) \left( \Phi(X)^\top \Lambda\, \Phi(X) + S \right) \Phi(X)^\top \begin{bmatrix} \Delta\alpha \\ \Delta\beta \end{bmatrix} = -\Phi(X)\, \Phi(X)^\top g - \Phi(X)\, S\, \Phi(X)^\top \begin{bmatrix} \alpha \\ \beta \end{bmatrix},$$
(9.43)
$$\left( K \Lambda K + S' K \right) \begin{bmatrix} \Delta\alpha \\ \Delta\beta \end{bmatrix} = -K g - S' K \begin{bmatrix} \alpha \\ \beta \end{bmatrix},$$
(9.44)
$$\left( \Lambda K + S' \right) \begin{bmatrix} \Delta\alpha \\ \Delta\beta \end{bmatrix} = -g - S' \begin{bmatrix} \alpha \\ \beta \end{bmatrix}.$$
(9.45)

Given the parameters, a test instance x is classified by f(x; β) = sign(Σ_{i=1}^{n_tr} β_i k(x, x_i)).
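
Given K assembled from the block design of (9.40), and Λ, g, and S′ assembled as in section 9.6, the dual update is a plain linear solve; a minimal sketch:

```python
import numpy as np

def dual_newton_step(K, Lam, g, S_prime, ab):
    """One dual Newton step per (9.45):
    (Lam K + S') [d_alpha; d_beta] = -g - S' [alpha; beta]."""
    delta = np.linalg.solve(Lam @ K + S_prime, -g - S_prime @ ab)
    return ab + delta

def kernel_classify(x, X_tr, beta, k):
    """f(x; beta) = sign(sum_{i=1..n_tr} beta_i k(x, x_i))."""
    return np.sign(sum(b_i * k(x, x_i) for b_i, x_i in zip(beta, X_tr)))
```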

9.8 Convexity Analysis and Solving the Optimization Problems

The following theorem specifies the conditions for convexity of optimization problem 9.3. With this theorem we can easily check whether the integrated classifier for covariate shift is convex for specific models of the negative log-likelihood functions. The negative log-likelihood function ℓ_w itself and its first and second derivatives are needed.

Theorem 9.4 Optimization problem 9.3 is in general convex if ℓ_v is convex and

$$\ell_{w,i}\, \ell''_{w,i} - \ell'^2_{w,i} \ge 0.$$
(9.46)

Proof: Looking at optimization problem 9.3 we immediately see that the regularizers are convex and, if ℓ_v is convex, the second term is convex as well; we only need to analyze the convexity of the remaining term,

$$\sum_{i=1}^{n_{tr}} \frac{p(\sigma = 1|v)}{p(\sigma = -1|v)}\, \exp(-v^\top x_i)\, \ell_w(y_i w^\top x_i) = \sum_{i=1}^{n_{tr}} \omega_i\, \ell_{w,i}.$$
(9.47)

A function is convex if the Hessian is positive semidefinite and this is the case if and only if

$$a^\top H a \ge 0$$
(9.48)

for all vectors a and Hessian H.

With the notation of section 9.6 the Hessian of (9.47) is

$$\begin{bmatrix} X_{tr} & 0 \\ 0 & X_{tr} \end{bmatrix}^\top \begin{bmatrix} \operatorname{diag}_{i=1..n_{tr}}\!\left(\omega_i \ell_{w,i}\right) & \operatorname{diag}_{i=1..n_{tr}}\!\left(-\omega_i \ell'_{w,i} y_i\right) \\ \operatorname{diag}_{i=1..n_{tr}}\!\left(-\omega_i \ell'_{w,i} y_i\right) & \operatorname{diag}_{i=1..n_{tr}}\!\left(\omega_i \ell''_{w,i}\right) \end{bmatrix} \begin{bmatrix} X_{tr} & 0 \\ 0 & X_{tr} \end{bmatrix}.$$
(9.49)

Using the condition of (9.48), and because all ω_i are positive, the Hessian is positive semidefinite if the following matrix is positive semidefinite:

$$\begin{bmatrix} \operatorname{diag}_{i=1..n_{tr}}\!\left(\ell_{w,i}\right) & \operatorname{diag}_{i=1..n_{tr}}\!\left(-\ell'_{w,i} y_i\right) \\ \operatorname{diag}_{i=1..n_{tr}}\!\left(-\ell'_{w,i} y_i\right) & \operatorname{diag}_{i=1..n_{tr}}\!\left(\ell''_{w,i}\right) \end{bmatrix}.$$
(9.50)

Applying (9.48) and splitting a into two equally sized subvectors a = [a_1, a_2], the condition for convexity is (componentwise, since all blocks are diagonal)

$$\ell_{w,i}\, a_1 a_1 - 2\, \ell'_{w,i} y_i\, a_1 a_2 + \ell''_{w,i}\, a_2 a_2 \ge 0.$$
(9.51)

Multiplication of (9.51) with ℓ_{w,i} and adding and subtracting ℓ'^2_{w,i} y_i^2 a_2 a_2 leads to (9.52). Equation (9.53) holds by the binomial theorem and because y_i^2 = 1 for binary labels. For ℓ_{w,i} a_1 = ℓ'_{w,i} y_i a_2 the term (ℓ_{w,i} a_1 − ℓ'_{w,i} y_i a_2)^2 takes its minimum value zero; this means (9.53) is nonnegative for arbitrary a_1 and a_2 if (9.54) is satisfied.

$$\ell^2_{w,i}\, a_1 a_1 - 2\, \ell_{w,i} \ell'_{w,i} y_i\, a_1 a_2 + \ell_{w,i} \ell''_{w,i}\, a_2 a_2 + \ell'^2_{w,i} y_i^2\, a_2 a_2 - \ell'^2_{w,i} y_i^2\, a_2 a_2 \ge 0,$$
(9.52)
$$\left( \ell_{w,i}\, a_1 - \ell'_{w,i} y_i\, a_2 \right)^2 + a_2 a_2 \left( \ell_{w,i} \ell''_{w,i} - \ell'^2_{w,i} \right) \ge 0,$$
(9.53)
$$\ell_{w,i}\, \ell''_{w,i} - \ell'^2_{w,i} \ge 0.$$
(9.54)

In order to check the optimization criterion (optimization problem 9.3) for convexity we need to choose models of the negative log-likelihoods ℓ_v and ℓ_w and derive their first and second derivatives. These derivations are also needed to actually minimize the optimization criterion with the Newton update steps derived in the last section.

For the model of the covariate shift we use a logistic model, ℓ_v(σ_i v⊤x) = log(1 + exp(−σ_i v⊤x)); the abbreviations of section 9.6 can now be expanded:

$$\ell'_{v,i}\, \sigma_i x_{ij} = -\frac{\exp(-\sigma_i v^\top x_i)}{1 + \exp(-\sigma_i v^\top x_i)}\, \sigma_i x_{ij}; \qquad \ell''_{v,i}\, x_{ij} x_{ik} = \frac{\exp(-\sigma_i v^\top x_i)}{\left(1 + \exp(-\sigma_i v^\top x_i)\right)^2}\, x_{ij} x_{ik}.$$
(9.55)

For the model of the target classifier we detail the derivations for logistic and for exponential models of ℓ_w. For the logistic model the derivatives of ℓ_w are the same as for ℓ_v; only v needs to be replaced by w and σ_i by y_i. For an exponential model with ℓ_w(y_i w⊤x) = exp(−y_i w⊤x) the abbreviations are expanded as follows:

$$\ell'_{w,i}\, y_i x_{ij} = -\exp(-y_i w^\top x_i)\, y_i x_{ij}; \qquad \ell''_{w,i}\, x_{ij} x_{ik} = \exp(-y_i w^\top x_i)\, x_{ij} x_{ik}.$$
(9.56)

Using theorem 9.4 we can now easily check the convexity of the integrated classifier with a logistic model and with an exponential model of ℓ_w.

Corollary 9.5 Optimization problem 9.3 with logistic model for ℓ_w is nonconvex.

Proof: Inserting the logistic function into (9.46) we get

$$\ell_{w,i}\, \ell''_{w,i} - \ell'^2_{w,i} = \frac{\exp(-y_i w^\top x_i)}{\left(1 + \exp(-y_i w^\top x_i)\right)^2} \left( \log\!\left(1 + \exp(-y_i w^\top x_i)\right) - \exp(-y_i w^\top x_i) \right).$$
(9.57)

The first factor of (9.57) is always positive and the difference term is always negative, since log(1 + u) < u for all u > 0; thus optimization problem 9.3 with logistic model for ℓ_w is nonconvex.▪

Empirically, we find that it is a good choice to select the parameters of a regular i.i.d. logistic regression classifier as starting point for the Newton gradient search. Since i.i.d. logistic regression has a convex optimization criterion, this starting point is easily found.

One can easily show that optimization problem 9.3 is nonconvex also when ℓ_w is chosen as the hinge loss or the quadratic loss.

Corollary 9.6 Optimization problem 9.3 with exponential model for ℓ_w is convex.

Proof: Inserting the exponential model into the above criterion results in the nonnegative expression

$$\ell_{w,i}\, \ell''_{w,i} - \ell'^2_{w,i} = \exp(-y_i w^\top x_i)\, \exp(-y_i w^\top x_i) - \left( -\exp(-y_i w^\top x_i) \right)^2 = 0.$$
(9.58)

This means the global optimum of optimization problem 9.3 with exponential model for ℓ_w can easily be found by Newton gradient descent.
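
The closed-form checks of corollaries 9.5 and 9.6, and the remark on the hinge and quadratic losses, can be spot-checked numerically; the snippet below evaluates the left-hand side of (9.46) on a grid of margins z = y w⊤x. A grid check of course only illustrates, and does not prove, the corollaries.

```python
import numpy as np

# Evaluate l * l'' - (l')^2, cf. (9.46), at margins z = y * w'x.
z = np.linspace(-3.0, 3.0, 13)
e = np.exp(-z)

# Logistic: l = log(1+e^-z), l' = -e^-z/(1+e^-z), l'' = e^-z/(1+e^-z)^2.
logistic = np.log1p(e) * e / (1 + e) ** 2 - (e / (1 + e)) ** 2
# Exponential: l = e^-z, l' = -e^-z, l'' = e^-z.
exponential = e * e - (-e) ** 2
# Quadratic: l = (1-z)^2, l' = -2(1-z), l'' = 2.
quadratic = (1 - z) ** 2 * 2 - (2 * (1 - z)) ** 2
# Hinge: l = max(0, 1-z), l' = -1 for z < 1 (else 0), l'' = 0 almost everywhere.
hinge = np.maximum(0.0, 1 - z) * 0.0 - np.where(z < 1, -1.0, 0.0) ** 2

print(logistic.min() < 0)             # True: logistic l_w violates (9.46)
print(np.allclose(exponential, 0.0))  # True: exponential l_w satisfies (9.46)
print(quadratic.min() < 0)            # True: quadratic loss violates (9.46)
print(hinge.min() < 0)                # True: hinge loss violates (9.46)
```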

9.9 Empirical Results

We study the benefit of two versions of the integrated classifier for covariate shift and other reference methods on spam filtering, text classification, and land mine detection problems. The first integrated classifier uses a logistic model for ℓ_w ("integrated log model"), the second an exponential model for ℓ_w ("integrated exp model").

The first baseline is a classifier trained under the i.i.d. assumption with logistic ℓ_w. All other reference methods consist of a two-stage procedure: first, the difference between training and test distribution is estimated; the classifier is then trained on weighted data in a second step. The baselines differ in the first stage; the second stage is in each case a logistic regression classifier with weighted examples.

The first reference method is two-stage logistic regression ("two-stage LR"). The example weights are computed according to (9.8); p(σ = 1|x, v) is estimated by training a logistic regression that discriminates training from test examples. The second method is kernel mean matching (chapter 8); we set ε = (√n_tr − 1)/√n_tr as proposed by the authors. In the third method, separate density estimates for p(x|λ) and p(x|θ) are obtained using kernel density estimation [Shimodaira, 2000]; the bandwidth of the kernel is chosen according to the rule of thumb of Silverman [1986]. We tune the regularization parameters of all methods and the variance parameter of the RBF kernels on a separate tuning set. We use a maximum likelihood estimate of n_tr/n_te for p(σ = 1|θ, λ)/p(σ = −1|θ, λ).

We use the spam filtering data of Bickel et al. [2007]; the collection contains nine different inboxes with test emails (5270-10964 emails, depending on inbox) and one set of training emails from different sources. We use a fixed set of 1000 emails as training data. We randomly select 32-2048 emails from one of the original inboxes. We repeat this process ten times for 2048 test emails and 20-640 times for 1024-32 test emails. As tuning data we use the labeled emails from an additional inbox different from the test inboxes. The performance measure is the rate by which the 1-AUC risk is reduced over the i.i.d. baseline [Bickel and Scheffer, 2007]; it is computed as 1 − (1 − AUC)/(1 − AUC_iid). We use linear kernels for all methods. We analyze the rank of the kernel matrix and find that it fulfills the universal kernel requirement of kernel mean matching; this is due to the high dimensionality of the data.

Figure 9.1 (top left) shows the result for various numbers of unlabeled examples. The results for a specific number of unlabeled examples are averaged over 10-640 random test samples and averaged over all nine inboxes. Averaged over all users and inbox sizes the absolute AUC of the i.i.d. classifier is 0.994. Error bars indicate standard errors of the 1-AUC risk.

The three discriminative density estimators and kernel mean matching perform similarly well. The differences from the i.i.d. baseline are highly significant. For 2048 test examples the 1-AUC risk is even reduced by an average of 30% with the integrated exponential model classifier! The kernel density estimation procedure is not able to beat the i.i.d. baseline.

We now study text classification using computer science papers from the Cora dataset. The task is to discriminate machine learning from networking papers. We select 812 papers written before 1996 from both classes as training examples and 1285 papers written after 1996 as test examples. For parameter tuning we apply an additional time split on the training data: we train on the papers written before 1995 and tune on papers written in 1995. Title and abstract are transformed into TFIDF vectors; the number of distinct words is 40,000. We again use linear kernels (rank analysis verifies the universal kernel property) and average the results over 20-640 random test samples for different sizes (1024-32) of test sets. The resulting 1-AUC risk is shown in figure 9.1 (top right). The average absolute AUC of the i.i.d. classifier is 0.998. The methods based on discriminative density estimates significantly outperform all other methods. Kernel mean matching is not displayed because its average performance lies far below the i.i.d. baseline. The integrated models reduce the 1-AUC risk by 15% for 1024 test examples; for larger numbers of test examples (128-1024) they perform slightly better than the two-step decomposition.

In a third set of experiments we study the problem of detecting land mines using the data set of Xue et al. [2007]. The collection contains data from 29 minefields in different regions. Binary labels (land mine or safe ground) and nine-dimensional feature vectors extracted from radar images are provided. There are about 500 examples for each minefield. Each of the fields has a distinct distribution of input patterns, varying from highly foliated to desert areas.

We enumerate all 29 × 28 pairs of minefields, using one field as training, and the other as test data. For tuning we hold out 4 of the 812 pairs. Results are increases over the i.i.d. baseline, averaged over all 29 × 28 − 4 combinations. We use RBF kernels with variance σ2 = 0.3 for all methods. The results are displayed in figure 9.1 (bottom left). The average absolute AUC of the i.i.d. baseline is 0.64 with a standard deviation of 0.07; note that the error bars are much smaller than the absolute standard deviation because they indicate the standard error of the differences from the i.i.d. baseline.

For this problem, the integrated exponential model classifier and kernel mean matching significantly outperform all other methods on average. Integrated logistic regression and two-stage logistic regression are still significantly better than the i.i.d. baseline except for 32 test examples. We assume that the nonconvex integrated logistic regression is inferior to the convex integrated exponential model method because it runs into unfavorable local optima.


Figure 9.1 Average reduction of 1-AUC risk over nine users for spam filtering (top left) and Cora machine learning/networking classification before and after 1996 (top right), and average increase of AUC for land mine detection over 812 pairs of minefields (bottom left), depending on the number of unlabeled test examples.

9.10 Conclusion

We derived a discriminative model for learning under differing training and test distributions. The contribution of each training instance to the optimization problem ideally needs to be weighted with its test-to-training density ratio. We showed that this ratio can be expressed, without modeling either training or test density, by a discriminative model that characterizes how much more likely an instance is to occur in the test sample than in the training sample.

When Bayesian model averaging is infeasible and the Bayes decision is unattainable, one can choose the joint MAP hypothesis of both the parameters of the test-to-training model and the final classifier. Optimizing these dependent parameters sequentially incurs an additional approximation compared to solving the joint optimization problem.

We derived a primal and a kernelized Newton gradient descent procedure for the joint optimization problem. Theorem 9.4 specifies the condition for the convexity of optimization problem 9.3. Checking the condition for popular loss functions as models of the negative log-likelihoods reveals that optimization problem 9.3 is convex only for the exponential loss.

Empirically, we found that the models with discriminative density estimates outperform the i.i.d. baseline and the kernel density estimated model in almost all cases. For spam filtering the integrated and the two-step models perform similarly well. For land mine detection the convex integrated exponential model classifier and kernel mean matching significantly outperform all other methods.

Acknowledgment

We gratefully acknowledge support from the German Science Foundation DFG and from STRATO AG. We wish to thank Jiayuan Huang, Alex Smola, Arthur Gretton, Karsten Borgwardt, and Bernhard Schölkopf, who provided their implementation of the kernel mean matching algorithm.