
Algorithmic Stability and Learning on Manifolds
Partha Niyogi
University of Chicago
The talk consists of two parts. In the first part, we review the notion of algorithmic
stability and how it can be used to obtain bounds on the generalization error from
estimates of the training error. We introduce the new notion of training stability,
which is sufficient for tight concentration bounds in the general setting and is both
necessary and sufficient for PAC learning.
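To give a sense of the flavor of such results (this is the classical uniform-stability bound of Bousquet and Elisseeff, not the training-stability bound of the talk): if a learning algorithm A has uniform stability \beta with respect to a loss bounded by M, then for an i.i.d. training sample S of size m, with probability at least 1 - \delta,

    R(A_S) \le \hat{R}_{\mathrm{emp}}(A_S) + 2\beta + (4m\beta + M)\sqrt{\ln(1/\delta)/(2m)},

where R and \hat{R}_{\mathrm{emp}} denote the true and empirical risks of the hypothesis A_S trained on S.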
In the second part, we consider several algorithms for which the notion of algorithmic
stability seems useful. In particular, we look at problems of clustering, classification,
and regression in a setting where the data lie on a low-dimensional Riemannian manifold
embedded in a high-dimensional ambient (Euclidean) space.
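The abstract does not name specific algorithms; as one illustrative sketch of the kind of manifold-based method meant here (Laplacian Eigenmaps, in the spirit of Belkin and Niyogi), the following minimal NumPy code builds a neighborhood graph and its graph Laplacian, then uses the bottom eigenvectors to recover low-dimensional coordinates for data sampled from a circle (a one-dimensional manifold) embedded in R^10. The function name and the parameters k and sigma are hypothetical choices for this sketch, not taken from the talk.

    import numpy as np

    def laplacian_eigenmaps(X, n_components=2, k=10, sigma=1.0):
        """Sketch of Laplacian Eigenmaps: embed points lying on a
        low-dimensional manifold in a high-dimensional ambient space."""
        n = X.shape[0]
        # Pairwise squared Euclidean distances in the ambient space.
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        # k-nearest-neighbour adjacency with heat-kernel weights.
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(d2[i])[1:k + 1]      # skip the point itself
            W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
        W = np.maximum(W, W.T)                     # symmetrise the graph
        D = np.diag(W.sum(axis=1))
        L = D - W                                  # unnormalised graph Laplacian
        # Smallest nontrivial eigenvectors give the embedding coordinates.
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1:n_components + 1]

    # Example: points on a circle (a 1-D manifold) embedded in R^10 with noise.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 200)
    circle = np.stack([np.cos(t), np.sin(t)], axis=1)
    X = np.concatenate([circle, 0.01 * rng.normal(size=(200, 8))], axis=1)
    Y = laplacian_eigenmaps(X, n_components=2, k=10, sigma=0.5)

The graph Laplacian here acts as a discrete proxy for the Laplace-Beltrami operator of the underlying manifold, which is what connects such constructions to the Riemannian setting described above.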