data is sampled from an underlying manifold embedded in a high-dimensional space. Recent
approaches to dimensionality reduction, feature selection, and classification belong to this
setting (see [4, 26]). Recently, Belkin et al. [5] proposed a new framework that exploits
the geometry of the probability distribution generating the data and incorporates it
as an additional regularization term.
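Concretely, this kind of framework augments a standard regularized least squares objective with a graph Laplacian penalty built from both labeled and unlabeled points. The following is a minimal linear sketch of that idea; the function names, RBF adjacency weights, and parameter values are illustrative choices of ours, not the exact algorithm of [5]:

```python
import numpy as np

def rbf_adjacency(X, sigma=1.0):
    # Pairwise RBF weights W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)),
    # with zero diagonal, built from ALL points (labeled and unlabeled).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def laplacian_rls(X, y, labeled, gamma_A=1e-2, gamma_I=1e-2, sigma=1.0):
    """Linear Laplacian-regularized least squares (illustrative sketch).

    Minimizes, over w,
        (1/l) * sum_{i in labeled} (y_i - w.x_i)^2
        + gamma_A * ||w||^2                       (ambient regularizer)
        + (gamma_I / n^2) * w' X' L X w           (intrinsic/manifold term),
    where L = D - W is the combinatorial graph Laplacian. Since the
    objective is quadratic in w, the minimizer solves a linear system.
    """
    n, d = X.shape
    W = rbf_adjacency(X, sigma)
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
    l = len(labeled)
    Xl, yl = X[labeled], y[labeled]
    A = (Xl.T @ Xl) / l + gamma_A * np.eye(d) + gamma_I * (X.T @ L @ X) / n**2
    return np.linalg.solve(A, Xl.T @ yl / l)
```

The intrinsic term penalizes functions that vary sharply between points connected by heavy graph edges, so unlabeled points shape the decision function even though they never enter the data-fit term.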
While various ideas have been proposed based on different intuitions, only recently
have theoretical studies begun to explain why these methods work. For semi-supervised
classification problems, generalization error bounds under cluster assumptions
are given in [18]. For graph-based semi-supervised learning, Zhang [16, 17]
presents learning bounds and derives an optimal kernel representation by minimizing the
bound. In [3], generalization error bounds related to structural invariants of the graph
are established using the notion of algorithmic stability. In [19], a novel large-margin
semi-supervised learning methodology is introduced using grouping information from
unlabeled data and the concept of margins; in addition, the generalization error of the
margin method using both labeled and unlabeled data is estimated.
Despite this progress, many open problems remain. Thus, inspired by the error analysis on
the hypercube in [14], we further study the generalization performance of a new graph-based
classification algorithm, which will be introduced in the following section. In this paper, we try
to bring together three distinct concepts that have recently received independent attention
in machine learning: regularization learning on graphs, proposed in [1, 3, 7]; error
analysis in RKHS (see [8, 11, 23]), concentration inequa