Domain Invariant Transfer Kernel Learning

Abstract: Domain transfer learning generalizes a learning model across
training data and testing data with different distributions. A general
principle to tackle this problem is reducing the distribution difference
between training data and testing data such that the generalization error
can be bounded. Current methods typically model the sample distributions
in the input feature space, which relies on a nonlinear feature mapping to
capture the distribution discrepancy. However, this nonlinear feature
space may not be optimal for kernel-based learning machines. To this
end, we propose a transfer kernel learning (TKL) approach to learn a
domain-invariant kernel by directly matching source and target
distributions in the reproducing kernel Hilbert space (RKHS). Specifically,
we design a family of spectral kernels by extrapolating the target eigensystem
to the source samples via Mercer's theorem. The spectral kernel minimizing
the approximation error to the ground truth kernel is selected to construct
domain-invariant kernel machines. Comprehensive experiments on text
categorization, image classification, and video event recognition datasets
verify that the proposed TKL approach outperforms several state-of-the-art
methods in both effectiveness and efficiency.
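To make the core idea concrete, the sketch below illustrates the Nyström-style extrapolation the abstract alludes to: eigendecompose a kernel on the target samples, extrapolate the target eigensystem onto the source samples through the cross-domain kernel, and rebuild a kernel over all samples from that shared eigensystem. This is only an illustrative sketch of the extrapolation step, not the full TKL method; the RBF kernel, the eigenvalue cutoff, and all function names here are assumptions for the example, and TKL's spectral relaxation and approximation-error minimization are omitted.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian kernel.
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def extrapolated_kernel(Xs, Xt, gamma=1.0):
    """Extrapolate the target eigensystem onto the source samples
    (Nystrom-style) and rebuild a kernel over source + target data."""
    Kt = rbf_kernel(Xt, Xt, gamma)           # target Gram matrix
    lam, V = np.linalg.eigh(Kt)              # target eigensystem
    keep = lam > 1e-8                        # drop negligible components
    lam, V = lam[keep], V[:, keep]
    # Cross-kernel between all samples (source stacked on target)
    # and the target samples.
    Ka = rbf_kernel(np.vstack([Xs, Xt]), Xt, gamma)
    Phi = Ka @ V / lam                       # extrapolated eigenvectors
    # Spectral kernel on all samples, built from the target eigensystem.
    return (Phi * lam) @ Phi.T
```

On the target block this reconstruction reproduces the target Gram matrix exactly (up to the dropped components), while the source block inherits the target's spectral structure, which is the sense in which the resulting kernel is domain-invariant.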